Implementing touch with Input System's Enhanced Touch API
As with anything that gives you deep control, there are several ways to gather touch input. Rather than using the Input Action asset setup, which is great for supporting multiple input devices, you’ll use the EnhancedTouchSupport API. This API is useful when you need to handle many touch sources. To start, look at how you can get useful information about the touch actions taken.
Open the script InputManager in your favorite code editor. Add these using statements right at the top of the file:
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
The second using statement aliases Touch to the Enhanced Touch version so it doesn’t clash with the legacy UnityEngine.Touch type. Then add an Awake method to enable Enhanced Touch Support:
private void Awake()
{
    // Enhanced Touch support must be enabled before the Touch API reports anything
    EnhancedTouchSupport.Enable();
}
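Enhanced Touch support stays on once enabled. If you no longer need it, the API also exposes a matching EnhancedTouchSupport.Disable() call. Here’s a minimal sketch, assuming you want to tie it to this component’s lifecycle:

private void OnDisable()
{
    // Turn Enhanced Touch support back off when this component shuts down
    EnhancedTouchSupport.Disable();
}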
Finally, add the following code to Update to start tracking touch actions:
if (Touch.activeFingers.Count == 1)
{
    // Read the latest touch data for the one active finger
    Touch activeTouch = Touch.activeFingers[0].currentTouch;
    Debug.Log($"Phase: {activeTouch.phase} | Position: {activeTouch.startScreenPosition}");
}
Save your changes and return to the Unity editor.
Before running the project, take a moment to understand the properties available from a Touch:
A finger is active if it’s currently touching the screen. You can access all active fingers by looping through the Touch.activeFingers array. Each active finger has a currentTouch property, which gives you detailed information about the touch action that occurred.
You can filter on the count or index of a finger. By limiting the count of activeFingers to one, you ensure Debug.Log only executes when one finger touches the screen.
If you have a touchscreen monitor, you can run the project directly in the editor. Otherwise, build and run on a mobile device. Try touching the screen with multiple fingers and notice that no debug logs print to the console for them.
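If you did want to react to every finger instead, a minimal sketch (illustrative only, not part of this tutorial’s InputManager) could loop over the collection rather than filter it:

foreach (var finger in Touch.activeFingers)
{
    // Each active finger exposes its index on the touchscreen and its latest touch data
    Touch touch = finger.currentTouch;
    Debug.Log($"Finger {finger.index}: {touch.phase} at {touch.screenPosition}");
}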
There are several useful properties associated with each Touch via the TouchControl type.
Phases give you a high-level understanding of what the input system believes happened. A touch action can have six phases:

- None: No touch activity has been registered.
- Began: A finger touched the screen this frame.
- Moved: The finger changed position on the screen.
- Stationary: The finger is on the screen but hasn’t moved since the last frame.
- Ended: The finger lifted off the screen, ending the touch normally.
- Canceled: The touch was interrupted before it could end normally, for example by the system.
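For example, here’s a minimal sketch (illustrative, not part of the tutorial’s script) that branches on the phase of activeTouch from the Update snippet above; the enum is fully qualified to avoid a clash with the legacy UnityEngine.TouchPhase:

switch (activeTouch.phase)
{
    case UnityEngine.InputSystem.TouchPhase.Began:
        Debug.Log("Finger touched the screen"); // first frame of contact
        break;
    case UnityEngine.InputSystem.TouchPhase.Moved:
        Debug.Log("Finger is moving"); // position changed since last frame
        break;
    case UnityEngine.InputSystem.TouchPhase.Ended:
        Debug.Log("Finger lifted off the screen"); // touch finished normally
        break;
}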
Here’s a diagram showing the phases for two common workflows:
Three key properties help determine where the finger is currently touching and how much it’s moved since the last frame:
- screenPosition: The position the finger is currently touching on the screen.
- startScreenPosition: The position where the finger first touched the screen.
- delta: The difference in screen position since the last frame, once the touch input registers a movement.
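As an illustration (not code from this tutorial), here’s how you might read all three inside the single-finger check in Update:

Touch activeTouch = Touch.activeFingers[0].currentTouch;
Vector2 current = activeTouch.screenPosition;    // where the finger is right now
Vector2 start = activeTouch.startScreenPosition; // where the touch first landed
Vector2 delta = activeTouch.delta;               // movement since the last frame
Debug.Log($"Moved {delta.magnitude} px this frame, {Vector2.Distance(start, current)} px from the start");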
With this in mind, it’s time to focus on the camera setup.