
Friday, 13 July 2018



In the field of image processing, finger tracking is a high-resolution technique used to determine the consecutive positions of a user's fingers and hence represent objects in 3D. The technique is also used as a computer input tool, acting as an external device, similar to a keyboard or a mouse.





Introduction

Finger-tracking systems focus on user-data interaction: the user manipulates, directly with the fingers, the volume of the 3D object to be represented. These systems arose from the problem of human-computer interaction, with the goal of making communication through gestures and hand movements more intuitive. A finger-tracking system tracks, in real time, the 3D and 2D position and orientation of each marker on the fingers, and uses intuitive hand gestures to interact.




Types of tracking

There are many options for implementing finger tracking, and a large amount of research has been carried out in this field. The techniques can be divided into tracking with an interface and tracking without one. In the latter, the hand is detected against the background by estimation over a sequence of images. In the former, tracking relies on an external intermediate device, which serves as the tool for executing the different instructions.

Tracking with interface

These systems use inertial and optical motion capture devices.

Inertial motion capture

An inertial motion capture system can capture finger movement by reading the rotation of each finger segment in 3D space. Applying these rotations along the kinematic chain, the whole human hand can be tracked in real time, without occlusion and wirelessly.

Inertial hand motion capture systems, such as Synertial mocap gloves, use small IMU-based sensors located on each finger segment. For the most precise capture, at least 16 sensors have to be used. There are also mocap glove models with fewer sensors (13 or 7) for which the remaining finger segments are interpolated (proximal segments) or extrapolated (distal segments). The sensors are usually inserted into a textile glove, which makes their use more comfortable.

Because inertial sensors capture movement in all three directions, flexion, extension and abduction can be captured for all fingers and the thumb.
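As an illustrative sketch of how per-segment rotations are chained along a finger to recover a fingertip position (the planar rotation helper, segment lengths and angles below are assumptions for the example, not values from any actual glove):

```python
import numpy as np

def rot_z(deg):
    """Rotation about the z axis; stands in for one IMU's measured segment rotation."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def fingertip_position(rotations, lengths):
    """Chain per-segment rotations along one finger's kinematic chain.

    rotations: one 3x3 rotation matrix per segment (proximal to distal)
    lengths:   segment lengths in the same order (metres)
    Returns the fingertip position in the hand frame.
    """
    pos = np.zeros(3)
    orient = np.eye(3)
    for R, L in zip(rotations, lengths):
        orient = orient @ R                            # accumulate rotation down the chain
        pos = pos + orient @ np.array([L, 0.0, 0.0])   # advance along the segment
    return pos

# Example: a three-segment finger flexed 30 degrees at each joint.
tip = fingertip_position([rot_z(30)] * 3, [0.05, 0.03, 0.02])
```

With zero rotations the fingertip lies at the sum of the segment lengths along the x axis; flexing each joint pulls it inward, which is what lets a correctly scaled skeleton reproduce fingertip contact.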

Hand skeleton

Since the inertial sensors track only rotation, the rotations have to be applied to some hand skeleton to get correct output. To obtain precise results (for example, to be able to touch the fingertips together), the hand skeleton has to be properly scaled to the real hand. Manual measurement of the hand or automatic extraction of the measurements can be used for this purpose.
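A minimal sketch of the scaling step, assuming a single uniform scale factor derived from one overall hand measurement (the bone names and default lengths are invented for the example):

```python
# Hypothetical default skeleton: bone lengths in metres for an "average" hand.
DEFAULT_BONES = {
    "index_proximal": 0.045,
    "index_middle": 0.025,
    "index_distal": 0.018,
}
DEFAULT_HAND_LENGTH = 0.19  # wrist to middle fingertip for the default skeleton

def scale_skeleton(measured_hand_length, bones=DEFAULT_BONES,
                   default_length=DEFAULT_HAND_LENGTH):
    """Uniformly rescale the default hand skeleton to a user's measured hand length."""
    factor = measured_hand_length / default_length
    return {name: length * factor for name, length in bones.items()}

user_bones = scale_skeleton(0.171)  # a hand 10% shorter than the default
```

Real systems would scale each bone from per-segment measurements rather than one global factor; this only illustrates the idea.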

Fusing data with an optical motion capture system

As described below, finger tracking is the most challenging part for optical motion capture systems (such as Vicon, OptiTrack or ART) because of marker occlusion during capture. Users of optical mocap systems claim that most of their post-processing work is usually caused by finger capture. Since inertial mocap systems (when calibrated correctly) are largely free of post-processing, a common practice among high-end mocap users is to fuse data from an inertial mocap system (fingers) with an optical mocap system (body position in space).
The process of merging mocap data is based on matching the timecode of each frame from the inertial and optical data sources. In this way any third-party software (e.g. MotionBuilder or Blender) can apply motion from the two sources, regardless of the mocap method used.
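The timecode-matching step can be sketched as pairing each optical frame with the nearest inertial frame and dropping pairs further apart than a tolerance (the `tc` field and the 4 ms default tolerance are assumptions for the example):

```python
import bisect

def merge_by_timecode(optical, inertial, tolerance=0.004):
    """Pair each optical frame with the nearest inertial frame by timecode.

    optical, inertial: lists of frame dicts with a 'tc' key (seconds),
    both sorted by 'tc'. Pairs further apart than `tolerance` are dropped.
    """
    tcs = [f["tc"] for f in inertial]
    pairs = []
    for opt in optical:
        i = bisect.bisect_left(tcs, opt["tc"])
        # The nearest inertial frame is either at the insertion point or just before it.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(tcs)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(tcs[k] - opt["tc"]))
        if abs(tcs[j] - opt["tc"]) <= tolerance:
            pairs.append((opt, inertial[j]))
    return pairs
```

Matching on nearest timecode rather than frame index tolerates the two systems running at different capture rates.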

Hand position tracking

On top of finger tracking, many users require position tracking of the whole hand in space. Several methods can be used for this purpose:

  • Capture the whole body with an inertial mocap system (the hand skeleton is mounted at the end of the body's kinematic chain); the palm position is derived from the body.
  • Capture the position of the palm (or forearm) with an optical mocap system.
  • Capture the position of the palm (or forearm) with another positional tracking method, such as those widely used in VR headsets (e.g. the HTC Vive Lighthouse).

Shortcomings of inertial motion capture systems

Inertial sensors have two main disadvantages for finger tracking:

  • Problems capturing the absolute position of the hand in space (already discussed above).
  • Problems with magnetic interference: metallic materials in the environment disturb the sensors. This matters mainly because the hands frequently come into contact with different objects, often made of metal. Current motion capture gloves can withstand considerable magnetic interference; how much depends on several factors, such as the manufacturer, the price range and the number of sensors used in the glove.

Optical motion capture system

Through the tracking of markers and patterns in 3D, the system identifies and labels each marker according to its position on the user's fingers. The 3D coordinates of these labeled markers are produced in real time and can be used by other applications.

Marker

Some optical systems, such as Vicon or ART, can capture hand motion through markers. Each hand carries a marker for every "operative" finger. Three high-resolution cameras capture each marker and measure its position; a position is produced only while a camera can see the marker. Visual markers, commonly rings or bracelets, are used to recognize the user's movements in 3D. In addition, as indicated by the classification above, these rings also function as a 2D interface.
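The underlying geometry, recovering a marker's 3D position from its pixel coordinates in two calibrated cameras, can be sketched with linear (DLT) triangulation; the projection matrices below are toy values, not a real camera calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two cameras.

    P1, P2: 3x4 camera projection matrices
    x1, x2: (u, v) pixel observations of the same marker
    Returns the marker's 3D position in world coordinates.
    """
    # Each observation contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

# Toy setup: two unit-focal cameras, the second shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = P1.copy()
P2[0, 3] = -1.0
```

Commercial systems use more than two cameras and a bundle-adjusted calibration, but the principle is the same: a marker position exists only while at least two cameras see it, which is why occlusion is the main failure mode.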

Occlusion as an interaction method

Visual occlusion is a very intuitive method for providing a more realistic view of virtual information in three dimensions. Interfaces based on it provide more natural 3D interaction techniques with six degrees of freedom.

Marker function

Markers operate through interaction points, which are usually fixed and located in known regions. It is therefore not necessary to follow every marker at all times, and multiple pointers can be treated the same way as a single pointer. To detect such pointers through interaction, infrared and ultrasound sensors are enabled. Because many pointers can be handled as one, the problems that arise under difficult conditions, such as bad lighting, motion blur, malformed markers or occlusion, are reduced. The system can follow objects even when some markers are not visible: since the spatial relationships of all the markers are known, the positions of the invisible markers can be computed from the known ones. There are several methods for marker detection, such as the border-marker and predicted-marker methods.
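Recovering an occluded marker from the known spatial relationships can be sketched as fitting the rigid transform of the visible markers (here with the Kabsch algorithm, one common choice) and applying it to the hidden marker's known local position; the marker layout in the test is an invented example:

```python
import numpy as np

def fit_rigid(local, world):
    """Kabsch algorithm: best-fit rotation R and translation t with world ~ R @ local + t."""
    lc, wc = local.mean(axis=0), world.mean(axis=0)
    H = (local - lc).T @ (world - wc)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, wc - R @ lc

def recover_marker(visible_local, visible_world, hidden_local):
    """Predict the world position of an occluded marker from the visible ones."""
    R, t = fit_rigid(np.asarray(visible_local, float),
                     np.asarray(visible_world, float))
    return R @ np.asarray(hidden_local, float) + t
```

This assumes the markers are rigidly attached to one segment and that at least three non-collinear markers remain visible.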

  • The HOMER technique combines ray-casting selection with direct manipulation: an object is selected and then its position and orientation are manipulated as if it were attached directly to the hand.
  • Conner's technique presents a series of 3D widgets that allow indirect interaction with virtual objects, through virtual widgets acting as intermediaries.

Articulated hand tracking

This technique is interesting for its simplicity and low cost, since it requires only one camera; that simplicity comes at the price of less precision than the previous techniques. It provides a new foundation for interactions in modeling, animation control and added realism. It uses a glove printed with a set of colors assigned according to finger position. The colors are detected by the computer vision system and, based on capture and color-localization functions, the hand pose is recovered.

Tracking without interface

In terms of visual perception, hands and legs can be modeled as articulated mechanisms: systems of rigid bodies connected by joints with one or more degrees of freedom. This model can be applied at a reduced scale to describe hand movements and at a full scale to describe whole-body motion. The movement of a particular finger, for example, can be recognized from its characteristic joint angles, independent of the hand's position relative to the camera.

Many tracking systems are based on a model focused on a sequence-estimation problem: given a sequence of images and a model of how it changes, the 3D configuration is estimated for each frame. All hand configurations can be represented as vectors in a state space that encodes the hand position and the angles of the finger joints. Each hand configuration generates a set of image features through edge detection at the finger joints, and the estimate for each image is computed by searching for the state vector that best fits the measured features. The finger joints add 21 states on top of the rigid-body motion of the palm, which raises the computational cost of the estimation. The technique models each link between finger joints as a cylinder: an axis is placed at each joint, and the line for this axis is the projection of the link. Then 3 DOF are used, since there are only three degrees of movement.

As with the previous typology, many different approaches have been published, so the processing steps and techniques differ depending on the purpose and needs of the person who will use them. Very generally, however, most systems perform the following steps:

  • Background subtraction: all captured images are convolved with a 5x5 Gaussian filter and then downscaled to reduce noisy pixel data.
  • Segmentation: a binary mask is applied, representing the pixels that belong to the hand in white and the background in black.
  • Region extraction: detection of the left and right hands, based on a comparison between them.
  • Feature extraction: locating the fingertips and deciding whether each candidate point is a peak or a valley. To classify a point, it is converted into a 3D vector in the x-y plane (usually called a pseudo-vector) and the cross product with its neighbour is computed: if the z component of the cross product is positive, the point is taken as a peak; if it is negative, as a valley.
  • Recognition of pointing and pinching gestures: taking the visible reference points (the fingertips) into account, specific gestures are assigned.
  • Pose estimation: identifying the hand pose through algorithms that compute the distance between positions.
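The peak/valley test in the feature-extraction step can be sketched as follows: the two contour edges around a candidate point are embedded in the x-y plane, and the sign of the z component of their cross product decides the classification (the coordinate convention here, with y growing upward, is an assumption; in image coordinates the two cases swap):

```python
import numpy as np

def classify_point(prev_pt, pt, next_pt):
    """Classify a contour extremum as a fingertip ('peak') or inter-finger gap ('valley').

    The edges into and out of the point are taken as 3D vectors with z = 0;
    the z component of their cross product tells which way the contour turns.
    """
    v1 = np.array([pt[0] - prev_pt[0], pt[1] - prev_pt[1], 0.0])
    v2 = np.array([next_pt[0] - pt[0], next_pt[1] - pt[1], 0.0])
    z = np.cross(v1, v2)[2]
    return "peak" if z > 0 else "valley"
```

Real pipelines fix the contour orientation first so that the sign convention is consistent across the whole hand outline.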

Other tracking techniques

It is also possible to perform finger tracking actively. The Smart Laser Scanner is a markerless finger-tracking system using a modified laser scanner/projector, developed at the University of Tokyo in 2003-2004. It can acquire three-dimensional coordinates in real time without any image processing at all: essentially, it is a rangefinder scanner that, instead of continuously scanning the full field of view, restricts its scanning area to a very narrow window precisely the size of the target. Gesture recognition has been demonstrated with this system. The sampling rate can be very high (500 Hz), allowing smooth trajectories to be acquired without the need for filtering (such as a Kalman filter).



Applications

Finger-tracking systems are, obviously, used to represent virtual reality, but their application has extended to professional, corporate and project-level 3D modeling. Such systems are rarely found in consumer applications because of their high price and complexity. In any case, the main purpose is to make it easier to execute commands on a computer through natural language or interaction gestures.

The guiding idea is that operating a computer should be easier if it can be done through natural language or interaction gestures. The main application of this technique is in 3D design and animation, where software such as Maya and 3D Studio Max uses these tools to allow more accurate and simpler control of the instructions we want to execute. The technology offers many possibilities, with real-time sculpting, building and 3D modeling by means of the computer being of the greatest importance.



References

  • Anderson, D., Yedidia, J., Frankel, J., Marks, J., Agarwala, A., Beardsley, P., Hodgins, J., Leigh, D., Ryall, K., & Sullivan, E. (2000). Tangible interaction + graphical interpretation: a new approach to 3D modeling. SIGGRAPH. pp. 393-402.
  • Angelidis, A., Cani, M.-P., Wyvill, G., & King, S. (2004). Swirling-Sweepers: Constant-volume modeling. Pacific Graphics. pp. 10-15.
  • Grossman, T., Wigdor, D., & Balakrishnan, R. (2004). Multi-finger gestural interaction with 3D volumetric displays. UIST. pp. 61-70.
  • Freeman, W. & Weissman, C. (1995). Television control by hand gestures. International Workshop on Automatic Face and Gesture Recognition. pp. 179-183.
  • Ringel, M., Berg, H., Jin, Y., & Winograd, T. (2001). Barehands: implement-free interaction with a wall-mounted display. CHI Extended Abstracts. pp. 367-368.
  • Cao, X. & Balakrishnan, R. (2003). VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. UIST. pp. 173-182.
  • Cassinelli, A., Perrin, S. & Ishikawa, M. (2005). Smart Laser-Scanner for 3D Human-Machine Interface. ACM SIGCHI 2005 (CHI '05) International Conference on Human Factors in Computing Systems, Portland, OR, USA, 2-7 April 2005. pp. 1138-1139.



External links

  • http://www.synertial.com/
  • http://www.vicon.com/
  • http://www.dgp.toronto.edu/~ravin/videos/graphite2006_proxy.mov
  • http://actuality-medical.com/Home.html
  • http://www.dgp.toronto.edu/
  • http://www.k2.t.u-tokyo.ac.jp/perception/SmartLaserTracking/
  • Finger tracking uses markers or without markers
  • 3D Hand Tracking

Source of the article: Wikipedia
