
input-device-capabilities's Introduction

Input Device Capabilities

This repository contains a proposed specification for an API that provides capability details of the underlying device that generated a DOM input event. In particular, this enables scripts to reliably identify MouseEvents derived from TouchEvents.

This repo also contains a polyfill and some tests. This API first shipped in Chrome 47.

If this API is successful (eg. shipped in multiple browsers) then it will hopefully be transitioned out of incubation and into the W3C UIEvents specification as maintained by the W3C Web Platform Working Group.
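As a minimal sketch of the core use case, a script can check the `sourceCapabilities` attribute this spec adds to `UIEvent` to tell a real mousedown apart from a compatibility mousedown derived from touch:

```javascript
// Distinguish a "real" mousedown from a compatibility mousedown derived
// from a touch interaction, using UIEvent.sourceCapabilities.
function handleMouseDown(event) {
  const caps = event.sourceCapabilities;
  if (caps && caps.firesTouchEvents) {
    // Derived from touch; the corresponding touchstart was already handled.
    return 'ignore';
  }
  return 'handle';
}
```

In a browser this would be wired up as `element.addEventListener('mousedown', handleMouseDown)`; the return values here are just illustrative.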


input-device-capabilities's People

Contributors

foolip, lanwei22, marcoscaceres, mathiasbynens, patrickhlauke, rbyers, saschanaz, sideshowbarker, sidvishnoi, tdresser, tidoust, travisleithead, yoavweiss


input-device-capabilities's Issues

Expand introduction to better describe problem/use case

DOM input events are an abstraction above low-level input events, only loosely tied to physical device input (e.g. ‘click’ events can be fired by a mouse, touchscreen or keyboard). There is no mechanism to obtain lower-level details about the physical device responsible for an event. This leads to problems, as developers may need to make assumptions or rely on heuristics. For example, when supporting both mouse and touch input, it's difficult to know whether a mousedown event represents new input from a mouse, or a compatibility event for a previously processed touchstart event.

I think the first part of this could use some additional text to introduce the idea of compat events.

DOM input events are an abstraction above low-level input events, only loosely tied to physical device input (e.g. ‘click’ events can be fired by a mouse, touchscreen or keyboard). There is no mechanism to obtain lower-level details about the physical device responsible for an event. Depending on implementation, certain types of input can also generate further "fake" DOM input events for compatibility reasons. For example, touchscreen interactions not only fire touch events, but also compatibility mouse events; when supporting both mouse and touch input, it's difficult to know whether a mousedown event represents new input from a mouse, or a compatibility event for a previously processed touchstart event. This leads to problems, as developers may need to make assumptions or rely on heuristics.

...or something like that?

Suggest adding a `isCoarse` property?

Similar to the issue "Suggest adding a `firesHoverEvents` property".

Suggests adding an `isCoarse` boolean property so that different pointer-based input devices can be handled accurately.
This would allow behaviour to change based on the precision of the active input, benefiting both user interaction and accessibility handling.

The current methods to detect hover support are not always accurate:

  1. Using CSS media queries: cross-browser support and reliability are mixed, and a negative result may mean either that the query itself is unsupported or that the current device doesn't match.
  2. PointerEvent.pointerType can help by assuming "mouse" means precise input and "touch" means coarse input; however, "pen" may be either, depending on the type of pen in use.
  3. Assuming that `firesTouchEvents !== true` means the input is precise.

Example scenarios:

  • Different forms of pen inputs have very different capabilities.
  • Laptop with mouse and trackpad but coarse input due to accessibility.
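The heuristics above can be sketched as follows. This is a hypothetical helper, not part of any API (`isCoarse` itself is only a proposal); `mediaMatcher` is injected so the sketch can run outside a browser, where it would be `q => window.matchMedia(q).matches`:

```javascript
// Approximate "is the active pointer coarse?" using today's unreliable
// heuristics, in the order listed above.
function guessIsCoarse(event, mediaMatcher) {
  // 1. CSS media query: may be unsupported or simply not match.
  if (mediaMatcher('(pointer: coarse)')) return true;
  // 2. pointerType assumptions: "pen" remains ambiguous.
  if (event.pointerType === 'touch') return true;
  if (event.pointerType === 'mouse') return false;
  // 3. Fall back to firesTouchEvents, which only approximates precision.
  return !!(event.sourceCapabilities &&
            event.sourceCapabilities.firesTouchEvents);
}
```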

Suggest adding an event for `PointerInputDeviceChanges`

An event to detect that the capabilities of a pointer device have changed when the user switches between devices.
Consider a touchscreen laptop with pen support, a mouse and a trackpad; a user may switch between these frequently.
Naming could be `onPointerDeviceChange` or `onPointerChange`.

Currently, to detect the capabilities of a device the code needs to listen to one of two types of pointer-events:

  • One that fires frequently, e.g. `pointermove`. Too frequent; requires additional overhead to ignore repeated events from the same input type.
  • One that fires at the point of interaction, e.g. `pointerdown`. May be detected too late to change input affordances.

It is unclear if events such as pointerover or pointerenter will refire if a user were to switch between mouse, trackpad or touch.
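The pointermove-based workaround described above can be sketched like this (a hypothetical helper, not an existing API; the deduplication is what the proposed event would make unnecessary):

```javascript
// Returns a pointermove handler that invokes onChange only when the active
// pointerType differs from the previous event's, ignoring repeats.
function makeDeviceChangeDetector(onChange) {
  let lastPointerType = null;
  return function (event) {
    if (event.pointerType !== lastPointerType) {
      lastPointerType = event.pointerType;
      onChange(event.pointerType);
    }
  };
}
// Usage in a browser:
// document.addEventListener('pointermove',
//     makeDeviceChangeDetector(type => console.log('switched to', type)));
```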

Should init*Event methods clear the sourceDevice?

How should this code behave?

var e = new UIEvent("test", {sourceDevice: new InputDevice()});
e.initUIEvent("test2", false, false, null, 0);
console.log(e.sourceDevice);

Although the init*Event methods are deprecated, we should probably still define (and test) this behavior. It's a pretty minor edge case though, unlikely to come up in practice I think.

Enumerate devices

From the beginning of the brainstorming for this API there was discussion about eventually supporting device enumeration. It looks like we didn't have a GitHub issue to track that though.

My initial naive proposal was to have a simple enumeration method:

partial interface InputDeviceCapabilities {
  // Return all of the (unique) input devices.
  static sequence<InputDeviceCapabilities> getAllDeviceCapabilities();
};

What to do about scroll events?

Scroll events don't technically represent a single input (eg. multiple devices may scroll the same element at the same time - what sourceDevice would we want then?). Should we spec a particular behavior? Should we have any tests?

Interaction with IME

This is spun off from my comment in the doc.
Please ignore if the main motivation for the API is pointing devices,
not keyboard or text input devices.

If one of the motivations is to detect low-level input device,
e.g., an Android phone user is typing text on-screen (virtual keyboard) or
bluetooth hardware keyboard, or USB barcode reader etc.,
then ignoring the IME layer would make sense and give the script author
the information about which physical device is being used.

If the author wants to know the details of 'how this input content is generated from user input' -
then the problem becomes very complex, depending on how much detail is needed.
Even with bluetooth keyboard, IME may preprocess the raw input on Android.

A raw keypress might go through very complex path if IME is involved
(maybe go back and forth several times from browser and IME) until
the final resulting character sequence is generated and delivered to the destination web page.
How an application (browser) interacts with the system's IME
varies from system to system, so it's hard to abstract into an interoperable API.

So my gut feeling is that IME should be out of scope for this spec,
but there might be some information about IME which can be useful.
Here are my random ideas:

  • a flag that indicates whether the input is directly from the hardware input device or not
    ('raw' or 'cooked')
  • if the device input is preprocessed, what is the preprocessor (IME, autocorrect,
    gesture/handwriting recognizer)?
  • For what language/locale the preprocessor is intended for (may not be a single locale)?
  • Is there in-flight raw input data or not (similar to compositionupdate and compositionend)
    to distinguish the state is 'in progress' or 'somewhat finished'.

Add ability to detect mouse buttons are swapped

Would it make sense to indicate in the InputDeviceCapabilities for mouse that the primary and secondary buttons are swapped?

You can do this via querying GetSystemMetrics(SM_SWAPBUTTON) on Windows.

The only problem I see with using the InputDeviceCapabilities API for this is that you can't query the API dynamically for installed devices; you have to wait for a device to appear.

Suggest adding a `firesHoverEvents` property or some way to detect that in active input device supports hover?

Suggests adding a `firesHoverEvents` boolean so that different pointer-based input devices can be handled accurately.
This would allow behaviour to change based on whether the active input supports hover, benefiting both user interaction and accessibility handling.

The current methods to detect hover support are not always accurate:

  1. Using CSS media queries: cross-browser support and reliability are mixed, and a negative result may mean either that the query itself is unsupported or that the current device doesn't support hover.
  2. PointerEvent.pointerType can help by assuming "mouse" supports hover and "touch" doesn't; however, "pen" may or may not support hover depending on the type of pen in use.
  3. Assuming that `firesTouchEvents === true` means hover is unsupported. However, this is not officially documented, and it's unclear whether it is an accurate assumption.

Example scenarios:

  • Laptop with touchscreen, mouse and trackpad.
  • Laptop with a touchscreen with pen support, mouse and trackpad.
  • Computer with a mouse and then digitizer tablet/pen (e.g. Wacom pen tablets) added after page load.
  • Laptop with mouse and trackpad but hover disabled for accessibility.
  • Tablet with mouse added.
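The three detection methods listed above can be sketched as follows. This is a hypothetical helper (`firesHoverEvents` is only a proposal); `mediaMatcher` is injected so the sketch runs outside a browser, where it would be `q => window.matchMedia(q).matches`:

```javascript
// Approximate "does the active pointer support hover?" using today's
// unreliable heuristics, in the order listed above.
function guessSupportsHover(event, mediaMatcher) {
  // 1. CSS media query: may be unsupported or simply not match.
  if (mediaMatcher('(hover: hover)')) return true;
  // 2. pointerType assumptions: "pen" remains ambiguous.
  if (event.pointerType === 'mouse') return true;
  if (event.pointerType === 'touch') return false;
  // 3. Take firesTouchEvents === true to mean no hover
  //    (an unverified assumption, per point 3 above).
  return !(event.sourceCapabilities &&
           event.sourceCapabilities.firesTouchEvents);
}
```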

Additionally, clarification on which events are intended for which capabilities?
e.g. (Not accurate)

| Event | On Event Handler | firesHoverEvents TRUE | firesHoverEvents FALSE | firesTouchEvents TRUE | firesTouchEvents FALSE |
| --- | --- | --- | --- | --- | --- |
| pointerover | onpointerover | | | | |
| pointerenter | onpointerenter | | | | |
| pointerdown | onpointerdown | | | | |
| pointermove | onpointermove | | ✓ While down | ✓ While down | |
| pointerup | onpointerup | | | | |
| pointercancel | onpointercancel | | | | |
| pointerout | onpointerout | | | | |
| pointerleave | onpointerleave | | | | |

Should the spec define any event-specific behavior?

There are a number of subtle implementation details for some events. Eg. does the sourceDevice of a keypress for an on-screen-keyboard represent the touchscreen or a logical keyboard device (probably the latter).

My thinking is that these are implementation details and since the API describes the behaviors you should see then it's OK if some user agents expose their subtly different behavior through this API. There's certainly a slippery slope here where we could add a lot of complexity to the spec and implementation for little actual developer value.

My preference is to wait for specific concrete use cases before considering adding any such complexity to the spec.

@tdresser WDYT?

Touch not detected properly on Windows Phone

IE fires mouse events for touch before firing any touch events. This breaks the approach used by the polyfill (looking for events that occur immediately after touch events). I could perhaps hack around this with a UA check - forcing firesTouchEvents to always be true for IE on Windows Phone, but would that always be right? Eg. does Windows Phone support bluetooth mice? Also what about desktop IE with touch event support explicitly enabled?

This is complicated enough that I'm going to leave Windows Phone unsupported for now. Perhaps @jacobrossi will submit a PR :-)

Focus events and keyboard navigation

I skimmed through #26 and the linked discussions but they seem to be focusing (har har) on displaying different styling based on keyboard vs mouse. #39 might also be relevant

I'd just like to throw out another use --

Currently, there is no good way to know if an element gained focus as a result of keyboard navigation. This makes an entire class of hover UI inaccessible, like this GitHub component here that you can't tab to: #26

Maybe something purpose-built higher up in focus events would be a better target than here, but I figured I'd mention it.

CompositionEvent polyfill doesn't work in Firefox: Illegal Constructor

http://rbyers.github.io/InputDevice/tests/inputdevice-script.html#usePolyfill fails on Firefox 38 with:

CompositionEvent.sourceDevice exists and can be initialized properly by script
Illegal constructor.

augmentEventConstructor/global[constructorName]@file:///usr/local/google/home/rbyers/code/InputDevice/inputdevice-polyfill.js:125:17
generateConstructorTest/<@file:///usr/local/google/home/rbyers/code/InputDevice/tests/inputdevice-script.html#usePolyfill:32:17
Test.prototype.step@http://w3c-test.org/resources/testharness.js:1371:20
test@http://w3c-test.org/resources/testharness.js:496:9
generateConstructorTest@file:///usr/local/google/home/rbyers/code/InputDevice/tests/inputdevice-script.html#usePolyfill:30:1
runTests@file:///usr/local/google/home/rbyers/code/InputDevice/tests/inputdevice-script.html#usePolyfill:59:7
runTestsAndDone@file:///usr/local/google/home/rbyers/code/InputDevice/tests/inputdevice-tests.js:7:7

This appears to be a CompositionEvent constructor bug in Mozilla and I don't see any way to feature detect for it. Bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1002256.

Add property for whether dragging the pointer scrolls

In the Pointer Events hackathon it was discussed that there's no good way to know if a user would expect a pointermove event to trigger scrolling. Eg. for a library trying to implement a custom scroller that behaves like the native OS scrollers (eg. iScroll). A naive thing to do is to trigger scrolling for pointerType="touch" only, but that won't handle the case of touch-like stylus devices (eg. Samsung SPen).

We should just add a new property to InputDeviceCapabilities that indicates whether this is a pointer where movement typically causes scrolling. For chromium this will probably always have the same value as firesTouchEvents.
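Until such a property exists, the best available approximation per the text above looks roughly like this (a hypothetical helper; the naive `pointerType === "touch"` check misses touch-like styli, so it falls back to `firesTouchEvents`):

```javascript
// Guess whether dragging this pointer would typically scroll.
// A dedicated capability bit, as proposed above, would replace this.
function dragLikelyScrolls(event) {
  if (event.pointerType === 'touch') return true; // the naive check
  // Touch-like styli (e.g. Samsung SPen) report "pen" but behave like touch;
  // firesTouchEvents is a closer approximation, at least in Chromium.
  return !!(event.sourceCapabilities &&
            event.sourceCapabilities.firesTouchEvents);
}
```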

/cc @scottgonzalez @jacobrossi @mustaqahmed @dtapuska @tdresser

Improve reliability of mouse event from touch interaction in polyfill

Nice polyfill. I noticed that you set lastTouchTime = Date.now() on each touch event and then check if the mouse event comes in within 1000ms of the last touch event to determine if the mouse event was from a touch interaction. This works most of the time (I used this technique in the-listener), except when the touch event starts a blocking process that takes longer than 1000ms. The mouse event will come in after the 1000ms and won't be recognized as originating from a touch interaction.
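For reference, the time-based heuristic described above looks roughly like this (simplified; names here are illustrative, not the polyfill's actual identifiers):

```javascript
// The timestamp-comparison heuristic: record the time of the last touch
// event, and treat any mouse event within 1000ms of it as touch-derived.
var lastTouchTime = 0;
function touchHandler() { lastTouchTime = Date.now(); }
function mouseEventFromTouch() {
  // Fails when a touch handler blocks for longer than the 1000ms window,
  // which is the problem the setTimeout approach below avoids.
  return Date.now() - lastTouchTime < 1000;
}
```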

To fix this, I used a setTimeout to take advantage of the browser's multithreading capabilities.

var recentTouch = false;
var touchTimerID = null;
function touchHandler(event) {
  recentTouch = true;
  // Only have one timer running at a time; otherwise recentTouch could be set
  // to false by an earlier touch while the most recent touch event's timer is
  // still running. So clear the pending timer when a touch event comes in
  // before the previous event's timer ends.
  if (touchTimerID !== null) window.clearTimeout(touchTimerID);
  touchTimerID = window.setTimeout(function() {
    touchTimerID = null;
    recentTouch = false;
  }, 600); // 1000ms may be longer than is needed with this technique
}

And then use recentTouch to determine if a mouse event is from a touch interaction.

Note that in the case of a long blocking process from a touch event, the browser will push the corresponding mouse event onto the queue when it should be executed (anywhere from a few ms to 300+), and then push the timeout function onto the queue after the timer expires. This results in the mouse event being pushed onto the callstack and executed before the timeout function, even if it isn't executed until several seconds later, so it's recognized as a mouse event from a touch interaction.

Also, if subsequent touch events occur while the blocking process is running, the browser will push the touch events onto the queue as each touch happens. If one of them is queued before the previous touch event's timer expires, it will execute before the timeout's function, and (this is the key part) the call to clearTimeout(touchTimerID) will remove the timeout's function from the queue even after the timer has finished. I did some testing for this scenario when I came up with the solution because I wasn't sure, but it appears to be the case.

Hope you find this useful.

Feedback

Not sure if this will help you or not.

I had this polyfill installed on a production website from 2015 to 2018 with no problems. But I recently installed "lazysizes", and your polyfill stopped that library from working on iOS devices such as iPads and iPhones; it worked OK on Chrome and Firefox. So we decided to remove the polyfill and everything worked.

But I thought I'd share the feedback.

Verify the tests pass with the polyfill on a few other browsers

Only tested on Chrome Linux at the moment. Check at least the latest stable versions of:

  • IE on Windows
  • Mobile Safari on iOS (not sure if keyboard and mouse tests are possible)
  • Firefox on Windows or Linux (probably excluding touch test)
  • Safari on Mac (excluding touch test)

Add additional keyboard event input tests

There are various additional UIEvents that are supposed to (or at one time were specified to) be fired on text input:

  • input (implemented in blink only as an 'Event')
  • beforeinput (not implemented by blink)
  • textinput (no longer spec'd but implemented by blink)
  • composition* (supported by blink on Android)

We should have test (and probably polyfill) coverage for these.
