How to Speed Up the Build and Run Process in Unity for Mobile Devices iOS/Android


This is what Unity Remote 5 is designed to do: it reduces the amount of time you spend deploying your app to your iOS or Android device during development. You can get the iOS version here and the Android version here.

It supports the following sensors:

  • Touch
  • Accelerometer
  • Gyroscope
  • Webcam
  • Screen orientation change events

The latest version, 5.0, added support for the following:

Android:

  • Gamepads connected to the remote device
  • Compass and location data (GPS)

iOS:

  • MFi gamepads connected to the remote device (requires Unity 5.4)
  • 3D Touch and Apple Pencil support
  • Apple TV devices
  • Compass and location data (GPS)

After you download the app linked above, connect your device to your computer and open the app.

From the Unity Editor, go to Edit->Project Settings->Editor, then choose your device (iOS) from the Device dropdown menu. Click Play and you will be able to test your touch functions from the Editor.
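
To verify that the connection works, a minimal sketch like the one below can be attached to any GameObject (the class name TouchLogger is just an illustration); with Unity Remote connected and the Editor in Play mode, it logs the touches streamed from the device:

using UnityEngine;

//Hypothetical helper: logs touches streamed from Unity Remote while in Play mode
public class TouchLogger : MonoBehaviour
{
    void Update()
    {
        //Each active touch reported by the device shows up in Input.touchCount
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            Debug.Log("Touch " + i + " phase: " + touch.phase + " at " + touch.position);
        }
    }
}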

Even with this app, it is recommended to test on the actual device once in a while. At the least, do a device build once a day to make sure everything is working as it should.

Unity3D performance on iPhone

Unity includes resources that will help with mobile development, such as assets and shaders that are specifically designed with mobile in mind.

You certainly won't want to take 'unoptimized' PC-quality assets, drop them into a Unity project, and export that for the iOS platform, as you will all but guarantee poor and unreliable performance. What you want to do instead is start building out a scene using assets of similar quality to those you want in your game, then see what the performance is on a real device. This will give you a feel for the level of performance you can expect from your game in production.

Remember that the performance of an iPhone, iPad, iPad 2, etc. will vary wildly depending on what you're doing and which features you're touching. While Unity3D has been heavily optimized for a variety of scenarios, you can certainly do things like fogging, which pushes the fill rate (a known limitation of the platform), and end up with horrendous performance.
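
For example, if profiling shows that fog is the bottleneck, it can be switched off globally. A minimal sketch (attaching it to any scene object is enough):

using UnityEngine;

public class FogToggle : MonoBehaviour
{
    void Start()
    {
        //Disabling global fog removes per-pixel fog work on fill-rate-limited devices
        RenderSettings.fog = false;
    }
}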

Could you possibly get more performance out of building your application purely in Objective-C? If you have the right skill set in engine development to design an implementation tailored to your specific requirements, certainly.

You just need to decide whether you want to spend your time writing technology or building a product. Most people who choose Unity do so because they get an exceptionally good engine whose performance most people cannot beat (try building your own landscape engine), while at the same time getting exceptional time to market... and in most cases it's time to market that really matters.

Unity Remote 4 and 5 not working on Oppo Mirror 5, Samsung Galaxy S7 Edge?

Yes, as the comments say, the solution for me was to make sure the Android phone's USB driver is detected by the computer. Then make sure to run Unity Remote on the phone first, before running Unity.

How to detect click/touch events on UI and GameObjects

You don't use the Input API for the new UI. You subscribe to UI events or implement an interface, depending on the event.

These are the proper ways to detect events on the new UI components:

1. Image, RawImage and Text Components:

Implement the needed interface and define its functions. The example below implements the most commonly used events.

using UnityEngine;
using UnityEngine.EventSystems;

public class ClickDetector : MonoBehaviour, IPointerDownHandler, IPointerClickHandler,
    IPointerUpHandler, IPointerExitHandler, IPointerEnterHandler,
    IBeginDragHandler, IDragHandler, IEndDragHandler
{
    public void OnBeginDrag(PointerEventData eventData)
    {
        Debug.Log("Drag Begin");
    }

    public void OnDrag(PointerEventData eventData)
    {
        Debug.Log("Dragging");
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        Debug.Log("Drag Ended");
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("Clicked: " + eventData.pointerCurrentRaycast.gameObject.name);
    }

    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Mouse Down: " + eventData.pointerCurrentRaycast.gameObject.name);
    }

    public void OnPointerEnter(PointerEventData eventData)
    {
        Debug.Log("Mouse Enter");
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        Debug.Log("Mouse Exit");
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        Debug.Log("Mouse Up");
    }
}

2. Button Component:

You use events to register for Button clicks:

using UnityEngine;
using UnityEngine.UI;

public class ButtonClickDetector : MonoBehaviour
{
    public Button button1;
    public Button button2;
    public Button button3;

    void OnEnable()
    {
        //Register Button Events
        button1.onClick.AddListener(() => buttonCallBack(button1));
        button2.onClick.AddListener(() => buttonCallBack(button2));
        button3.onClick.AddListener(() => buttonCallBack(button3));
    }

    private void buttonCallBack(Button buttonPressed)
    {
        if (buttonPressed == button1)
        {
            //Your code for button 1
            Debug.Log("Clicked: " + button1.name);
        }

        if (buttonPressed == button2)
        {
            //Your code for button 2
            Debug.Log("Clicked: " + button2.name);
        }

        if (buttonPressed == button3)
        {
            //Your code for button 3
            Debug.Log("Clicked: " + button3.name);
        }
    }

    void OnDisable()
    {
        //Un-Register Button Events
        button1.onClick.RemoveAllListeners();
        button2.onClick.RemoveAllListeners();
        button3.onClick.RemoveAllListeners();
    }
}

If you are detecting something other than a click on the Button, use Method 1. For example, to detect Button down rather than Button click, use IPointerDownHandler and its OnPointerDown function from Method 1, as in the sketch below.
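
A minimal sketch of that case, assuming the script is attached to the Button's GameObject:

using UnityEngine;
using UnityEngine.EventSystems;

//Detects press (rather than click) on the Button it is attached to
public class ButtonDownDetector : MonoBehaviour, IPointerDownHandler
{
    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Button Down: " + gameObject.name);
    }
}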

3. InputField Component:

You use events to register for InputField submission:

using UnityEngine;
using UnityEngine.UI;

public class InputFieldDetector : MonoBehaviour
{
    public InputField inputField;

    void OnEnable()
    {
        //Register InputField Events
        inputField.onEndEdit.AddListener(delegate { inputEndEdit(); });
        inputField.onValueChanged.AddListener(delegate { inputValueChanged(); });
    }

    //Called when Input is submitted
    private void inputEndEdit()
    {
        Debug.Log("Input Submitted");
    }

    //Called when Input changes
    private void inputValueChanged()
    {
        Debug.Log("Input Changed");
    }

    void OnDisable()
    {
        //Un-Register InputField Events
        inputField.onEndEdit.RemoveAllListeners();
        inputField.onValueChanged.RemoveAllListeners();
    }
}

4. Slider Component:

To detect when the slider value changes during a drag:

using UnityEngine;
using UnityEngine.UI;

public class SliderDetector : MonoBehaviour
{
    public Slider slider;

    void OnEnable()
    {
        //Subscribe to the Slider value-changed event
        slider.onValueChanged.AddListener(delegate { sliderCallBack(slider.value); });
    }

    //Will be called when the Slider changes
    void sliderCallBack(float value)
    {
        Debug.Log("Slider Changed: " + value);
    }

    void OnDisable()
    {
        //Un-subscribe from the Slider event. A freshly created anonymous
        //delegate would not match the one added above, so remove all.
        slider.onValueChanged.RemoveAllListeners();
    }
}
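
A note on the OnDisable above: RemoveListener only works when it receives the exact same delegate instance that was added, which is why the sketch falls back to RemoveAllListeners. If you need to remove a single listener, store the delegate first. A minimal sketch (the class name SliderListenerExample is illustrative):

using UnityEngine;
using UnityEngine.Events;
using UnityEngine.UI;

public class SliderListenerExample : MonoBehaviour
{
    public Slider slider;
    private UnityAction<float> sliderAction;

    void OnEnable()
    {
        //Store the delegate so the exact same instance can be removed later
        sliderAction = sliderCallBack;
        slider.onValueChanged.AddListener(sliderAction);
    }

    void OnDisable()
    {
        //Passing the stored instance lets RemoveListener find and remove it
        slider.onValueChanged.RemoveListener(sliderAction);
    }

    void sliderCallBack(float value)
    {
        Debug.Log("Slider Changed: " + value);
    }
}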

For other events, use Method 1.

5. Dropdown Component:

using UnityEngine;
using UnityEngine.UI;

public class DropdownDetector : MonoBehaviour
{
    public Dropdown dropdown;

    void OnEnable()
    {
        //Register to onValueChanged Events

        //Callback without parameter
        dropdown.onValueChanged.AddListener(delegate { callBack(); });

        //Callback with parameter
        dropdown.onValueChanged.AddListener(callBackWithParameter);
    }

    void OnDisable()
    {
        //Un-Register from onValueChanged Events
        dropdown.onValueChanged.RemoveAllListeners();
    }

    void callBack()
    {
        //Your code (no access to the new value here)
    }

    void callBackWithParameter(int value)
    {
        //Your code (receives the new dropdown index)
    }
}

NON-UI OBJECTS:

6. For 3D Objects (Mesh Renderer/any 3D Collider):

Add a PhysicsRaycaster to the Camera, then use any of the events from Method 1.

The code below will automatically add a PhysicsRaycaster to the main Camera.

using UnityEngine;
using UnityEngine.EventSystems;

public class MeshDetector : MonoBehaviour, IPointerDownHandler
{
    void Start()
    {
        addPhysicsRaycaster();
    }

    void addPhysicsRaycaster()
    {
        //Add a PhysicsRaycaster to the main Camera if the scene doesn't have one
        PhysicsRaycaster physicsRaycaster = GameObject.FindObjectOfType<PhysicsRaycaster>();
        if (physicsRaycaster == null)
        {
            Camera.main.gameObject.AddComponent<PhysicsRaycaster>();
        }
    }

    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Clicked: " + eventData.pointerCurrentRaycast.gameObject.name);
    }

    //Implement Other Events from Method 1
}

7. For 2D Objects (Sprite Renderer/any 2D Collider):

Add a Physics2DRaycaster to the Camera, then use any of the events from Method 1.

The code below will automatically add a Physics2DRaycaster to the main Camera.

using UnityEngine;
using UnityEngine.EventSystems;

public class SpriteDetector : MonoBehaviour, IPointerDownHandler
{
    void Start()
    {
        addPhysics2DRaycaster();
    }

    void addPhysics2DRaycaster()
    {
        //Add a Physics2DRaycaster to the main Camera if the scene doesn't have one
        Physics2DRaycaster physicsRaycaster = GameObject.FindObjectOfType<Physics2DRaycaster>();
        if (physicsRaycaster == null)
        {
            Camera.main.gameObject.AddComponent<Physics2DRaycaster>();
        }
    }

    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Clicked: " + eventData.pointerCurrentRaycast.gameObject.name);
    }

    //Implement Other Events from Method 1
}

Troubleshooting the EventSystem:

No clicks detected on UI, 2D Objects (Sprite Renderer/any 2D Collider) and 3D Objects (Mesh Renderer/any 3D Collider):

A. Check that you have an EventSystem. Without an EventSystem, clicks can't be detected at all. If you don't have one, create it yourself.


Go to GameObject ---> UI ---> Event System. This will create an EventSystem if one doesn't exist yet. If it already exists, Unity will just ignore the command.
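
If you prefer to guarantee this from code, here is a minimal sketch (the class name EventSystemChecker is illustrative) that creates an EventSystem with the standard StandaloneInputModule when the scene is missing one:

using UnityEngine;
using UnityEngine.EventSystems;

public class EventSystemChecker : MonoBehaviour
{
    void Awake()
    {
        //Only create an EventSystem if the scene doesn't already have one
        if (FindObjectOfType<EventSystem>() == null)
        {
            GameObject eventSystem = new GameObject("EventSystem");
            eventSystem.AddComponent<EventSystem>();
            eventSystem.AddComponent<StandaloneInputModule>();
        }
    }
}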


B. The UI component, or the GameObject with the UI component, must be under a Canvas; that is, a Canvas must be a parent of the UI component. Without this, the EventSystem will not function and clicks will not be detected.

This only applies to UI Objects. It doesn't apply to 2D (Sprite Renderer/any 2D Collider) or 3D Objects (Mesh Renderer/any 3D Collider).


C. If this is a 3D Object, make sure that a PhysicsRaycaster is attached to the camera. See #6 above for more information.


D. If this is a 2D Object, make sure that a Physics2DRaycaster is attached to the camera. See #7 above for more information.


E. If this is a UI Object you want to detect clicks on with the interface functions (OnBeginDrag, OnPointerClick, OnPointerEnter, and the other functions mentioned in #1), then the script with the detection code must be attached to that UI Object.


F. Also, if this is a UI Object you want to detect clicks on, make sure that no other UI Object is in front of it. If another UI element is in front of the one you want to detect clicks on, it will block those clicks.

To verify that this is not the issue, disable every object under the Canvas except the one you want to detect clicks on, then see if clicking it works.
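
If the object in front must stay visible, another option is to stop it from receiving raycasts at all. A minimal sketch, assuming the blocker has an Image component (the field name blockingImage is illustrative):

using UnityEngine;
using UnityEngine.UI;

public class ClickThrough : MonoBehaviour
{
    public Image blockingImage; //the overlay that is swallowing clicks

    void Start()
    {
        //With raycastTarget off, the EventSystem ignores this Image
        //and clicks reach the UI Object behind it
        blockingImage.raycastTarget = false;
    }
}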

Unity3D: Timer created at TouchEnded not working


Problem: StartTimer not starting when user clicks screen, touch not being registered so timer never starts. I'm using a mouse to test but will deploy for mobile.

That's the main issue here. Of course, there are still some logic errors in your code, but the timer shouldn't start at all while you're using a mouse to test, because the Input.touchCount and Input.GetTouch APIs only work on a mobile device; they respond to touch, not the mouse.

If you want to use them in the Editor, use Unity Remote 5. Download it on your mobile device, enable it in the Editor by going to Edit->Project Settings->Editor, then connect your device to your computer. You should then be able to use the touch API in the Editor without having to build. See this post for more information about Unity Remote 5.


If you want the code to be compatible with both mouse and touch, make a simple function that wraps the touch and mouse APIs. You can then use the combination of the UNITY_STANDALONE and UNITY_EDITOR preprocessor directives to detect when you are not running on a mobile platform.

Here is a simple mobile/desktop touch wrapper that works without Unity Remote 5:

bool ScreenTouched(out TouchPhase touchPhase)
{
#if UNITY_STANDALONE || UNITY_EDITOR
    //DESKTOP COMPUTERS
    if (Input.GetMouseButtonDown(0))
    {
        touchPhase = TouchPhase.Began;
        return true;
    }

    if (Input.GetMouseButtonUp(0))
    {
        touchPhase = TouchPhase.Ended;
        return true;
    }
    touchPhase = TouchPhase.Canceled;
    return false;
#else
    //MOBILE DEVICES
    if (Input.touchCount > 0)
    {
        touchPhase = Input.GetTouch(0).phase;
        return true;
    }
    touchPhase = TouchPhase.Canceled;
    return false;
#endif
}

Below is how to use it to get what you want in your question:

public float uiStartTime = 0;
public float uiMaxTime = 15f;
private bool timerRunning = false;

private void Update()
{
    //Increment timer if it's running
    if (timerRunning)
        uiStartTime += Time.deltaTime;

    TouchPhase touchPhase;

    if (ScreenTouched(out touchPhase))
    {
        //Check for a touch and start the timer if it's not running
        if (touchPhase == TouchPhase.Ended && !timerRunning)
        {
            timerRunning = true;
        }

        //If the user touches the screen before the 15 seconds, restart the timer.
        else if (touchPhase == TouchPhase.Ended && timerRunning && uiStartTime < uiMaxTime)
        {
            uiStartTime = 0f;
            Debug.Log("Timer Reset");
        }
    }

    //If the user hasn't touched the screen again and it has been 15 seconds, do something.
    if (uiStartTime >= uiMaxTime)
    {
        // Do Something
        Debug.Log("Timer not touched for 15 seconds");
    }
}

How does Unity distinguish between Android and iOS

There are basically two sides of Unity: the managed side and the native side. The managed part is just the C# API, and the native side is code written in C++ that communicates with Objective-C/Swift on iOS or Java on Android.

The managed part usually ships as .dll files, while the native side for Android is usually .so, .aar, .jar, or .dex files.

The native side for iOS is usually .a, .m, .mm, .c, or .cpp files. For iOS, some parts are not precompiled and remain as .cpp source until you build the generated project with Xcode.

This code works on Android and on iOS. I'm wondering, how does Unity know if we're running the game on Android or iOS?

Unity doesn't do this at run-time. There are so many Unity APIs that doing this check every time would be redundant and messy.

To simplify this, Unity uses different managed and native files for different platforms. These files are included at build time, and the decision is made in the Editor once you click the Build button.

First, go to <UnityInstallationDirectory>\Editor\Data\PlaybackEngines


Doing this in the Editor at build time avoids preprocessor directives, Application.platform checks, or mixing different platforms' code everywhere. If you select iOS and build the project, Unity will include the UnityEngine.dll for iOS located in the <UnityInstallationDirectory>\Editor\Data\PlaybackEngines\iOSSupport path. It will do the same for Android, but from the <UnityInstallationDirectory>\Editor\Data\PlaybackEngines\AndroidPlayer path.

There is a different UnityEngine.dll for each platform; you can see them in the paths mentioned above. These UnityEngine.dll files and other managed dlls are where you will see calls to the native side of the code, via DllImport or AndroidJavaClass. Note that UnityEngine.dll is not the only file included in the build: most files in these paths with the extensions mentioned above are included, both managed and native.
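
When you do need a platform check in your own scripts, both mechanisms mentioned above are available: compile-time preprocessor directives and the run-time Application.platform property. A minimal sketch of each:

using UnityEngine;

public class PlatformCheck : MonoBehaviour
{
    void Start()
    {
        //Compile-time check: only the matching branch is compiled into the build
#if UNITY_ANDROID
        Debug.Log("Compiled for Android");
#elif UNITY_IOS
        Debug.Log("Compiled for iOS");
#endif

        //Run-time check: evaluated while the game is running
        if (Application.platform == RuntimePlatform.Android)
            Debug.Log("Running on Android");
        else if (Application.platform == RuntimePlatform.IPhonePlayer)
            Debug.Log("Running on iOS");
    }
}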

How to force landscape mode when using mobile browser (Unity webgl)?

You can try with something like this:

lockAllowed = window.screen.lockOrientation(orientation);

You can find more information here:
https://developer.mozilla.org/en-US/docs/Web/API/Screen/lockOrientation

On Chrome, something like this should work:

var lockFunction = window.screen.orientation.lock;
if (lockFunction.call(window.screen.orientation, 'landscape')) {
    console.log('Orientation locked');
} else {
    console.error('There was a problem in locking the orientation');
}

Basically, you only need to specify which orientation you need (landscape in your case). Note that I'm not sure this solution will work on mobile.

So for mobile, you can also try creating a manifest.json:

<link rel="manifest" href="http://yoursite.com/manifest.json">

{
    "name": "A nice title for your web app",
    "display": "standalone",
    "orientation": "landscape"
}

A Unity-only solution is to rotate everything based on the x and y dimensions of the screen (using the Canvas rect): rotate when x > y and rotate back when that changes, so the user only ever sees a landscape layout.
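
A minimal sketch of that idea, assuming the whole UI lives under a single RectTransform (the field name content is illustrative); a real implementation would likely also need to swap the rect's width and height:

using UnityEngine;

public class ForceLandscape : MonoBehaviour
{
    public RectTransform content; //root RectTransform of the UI to rotate

    void Update()
    {
        //When the screen reports portrait (height > width), rotate the UI
        //90 degrees so the player still sees a landscape layout
        if (Screen.height > Screen.width)
            content.localRotation = Quaternion.Euler(0f, 0f, 90f);
        else
            content.localRotation = Quaternion.identity;
    }
}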


