A Human Computer Interface System Using Eye Ball Movements

A. Bharathi, Student, Department of Information Technology, Sri Sairam Engineering College, West Tambaram, [email protected]

Priyanka, Student, Department of Information Technology, Sri Sairam Engineering College, West Tambaram, [email protected]

K. Suvathy, Student, Department of Information Technology, Sri Sairam Engineering College, West Tambaram, [email protected]


T. P. Rani, Associate Professor, Department of Information Technology, Sri Sairam Engineering College, West Tambaram, [email protected]

Abstract — An individual human computer interface system using eye movement is introduced. Traditionally, the human computer interface uses a mouse or keyboard as the input device. The proposed system presents a hands-free interface between the computer and the user. The main objective is to control the mouse and keypad using the eye ball. The system also verifies the user's identity using face recognition, for which the Viola-Jones algorithm is used.

A camera is connected to the system, and MATLAB is used for user authentication. After successful authentication, the camera continues to scan the user's eye ball movement. During this stage, the physical keypad and mouse are frozen in order to block the user's key inputs.

On-screen keyboard and mouse control is then initiated, so that both can be operated through eye ball movements. MATLAB plays a vital role in controlling the on-screen keyboard and mouse, while Java is used to freeze the physical keypad and mouse functionality. The camera scans the eye ball of the authenticated user, and control of the mouse is achieved through eye ball movement. Letters are selected by eye-ball clicking for effective communication. The physical keyboard is released by pressing the Control, Alt and Delete keys.

MATLAB is used for face recognition and eye ball control. The on-screen keyboard and mouse are initiated, and the freezing of the physical mouse and keypad is achieved with Java.

Keywords — On-screen keyboard, face recognition and eye detection, web camera, MATLAB, computer, Java, Viola-Jones algorithm.

1. Introduction

In today's world, technology is upgraded to the newest level, yet the majority of computers still rely on the mouse and keyboard as the major input devices, which cannot be used by handicapped people. The proposed system describes a new method for handicapped people to communicate through computers with the help of their eyes only. Many devices such as computers and laptops offer touch screen technology, but that technology is still not cheap enough to be used on desktop systems. The main aim is to develop an interactive virtual human computer interface.

In our system, we use MATLAB to detect the web camera, which takes images continuously to focus on the eye pupil. With the help of various image processing techniques, face recognition and eye tracking are performed. For face recognition, the Viola-Jones algorithm is used.

2. Existing System

Nowadays, people use computers with their hands and a touch pad. Traditionally, the human computer interface uses a mouse and keyboard as input devices. An idea to control the computer mouse cursor with human eyes was introduced [1]. Blink actions were introduced to replace mouse clicks [2]. Generally, to open a file, one must click on the file using the physical mouse or touch pad. Instead, a new system is introduced to replace the physical mouse.

One can open a file using eye movement and blink actions; the left and right clicks are performed by blinking the left eye and the right eye, respectively.

3. Proposed System

The main objective is to control the mouse and keyboard using the eye ball, for handicapped people. An on-screen keyboard is displayed on the desktop to replace the physical keyboard. A camera is mounted on top of the desktop, and the image of the user in front of the computer is captured.

The image is compared and verified with the database for user authentication. After successful authentication, the on-screen keyboard is displayed on the desktop automatically. Once the on-screen keyboard is displayed, the physical keyboard and mouse are frozen. The freeze is released by pressing the Control, Alt and Delete keys.

The camera continues to scan for face recognition using the Viola-Jones algorithm. The eye is then detected to control the mouse and on-screen keyboard. Based on the blink action, the letter is typed and displayed for effective communication.

Human Eye Structure

The human eye is an important organ that senses light. The parts of the human eye relevant to eye tracking are described below. The transparent coat in front of the eye ball is the cornea. The muscle that controls the size of the pupil is called the iris; it works like the aperture in a camera to let light inside. The colour of the iris differs from person to person and can therefore be used in biometrics. The tough outer surface of the eye ball is the sclera, which appears white in the eye image. The boundary between the sclera and the iris is called the limbus. An eye image captured by a digital camera is shown in the figure below.

FIG 1: Structure of the eye

4. Block Diagram

5. System Working

A specialized video camera (Logitech C170) is mounted above the desktop to observe the eyes of the user sitting in front of it. The camera captures the video image of the eye and determines where the user is looking on the screen. No attachment to the user's head or body is necessary. To "select" any letter on the on-screen keyboard, the user looks at the letter for a specified period of time, and to "press" any letter, the user just needs to blink.
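The dwell-then-blink selection rule described above can be sketched as a small state machine. This is an illustrative reconstruction, not the paper's actual code: the class name, the frame-count threshold, and the per-frame update interface are all assumptions.

```python
# Sketch of dwell-time key selection: a key is "selected" once the gaze
# has stayed on it for a fixed number of consecutive frames.
# DWELL_FRAMES is a hypothetical threshold (e.g. ~0.5 s at 30 fps).
DWELL_FRAMES = 15

class DwellSelector:
    def __init__(self, dwell_frames=DWELL_FRAMES):
        self.dwell_frames = dwell_frames
        self.current_key = None   # key the gaze is currently on
        self.count = 0            # consecutive frames on that key

    def update(self, gazed_key):
        """Feed one frame's gaze target; return the key when dwell completes."""
        if gazed_key == self.current_key:
            self.count += 1
        else:
            self.current_key = gazed_key
            self.count = 1
        if self.count == self.dwell_frames:
            return gazed_key  # fires exactly once per dwell
        return None
```

Looking away resets the counter, so a selection only fires after an unbroken dwell on one key.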

In this system, no calibration procedure is required. The only input is the eye; no external hardware is attached or required.

The system starts by capturing an image of the human face, detects an eye from the face, converts it to gray level, removes noise, converts it to a binary image, calculates the pixel value, detects the sclera, divides the eye and the screen into quadrants, and finally performs mouse functions such as mouse move, left click, right click, double click, selection, drag and drop, and typing on the on-screen keyboard according to the pixel value. If the sum of white pixel values is zero, a mouse click operation is performed; if the sum of white pixel values of both eyes is one or more, mouse movement is performed.

6. Architecture Diagram

[Figure: architecture diagram showing gaze determination, on-screen keyboard, face detection, eye detection, and a truth table for mouse actions.]

7. Proposed Algorithm

The detailed processing steps are presented below:

1. A Logitech C170 camera is fixed on top of the desktop.
2. The camera takes the image of the user sitting in front of the computer.
3. The image is compared and verified with the database for user authentication.
4. After successful authentication, the on-screen keyboard is displayed automatically on the desktop.
5. Once the on-screen keyboard is displayed, the physical keyboard and mouse are frozen using Java code.
6. Freezing can be released by pressing the Control, Alt and Delete keys at the same time.
7. Once the on-screen keyboard is displayed, the camera starts scanning the user's eye by taking video.
8. The streaming video from the camera is broken into frames.
9. Each frame is checked for lighting conditions, because the camera requires sufficient light from external sources; otherwise an error message is displayed on the screen.
10. The captured frames, which are in RGB mode, are converted into black and white in order to find the edge movement.
11. Frames from the input source focusing on the eye are analyzed for iris detection (the centre of the eye) using the Viola-Jones algorithm.
12. A midpoint is then calculated by taking the mean of the left and right eye centre points.
13. The mouse then moves from one point to another on the screen, and the user performs clicking actions by blinking their eyes.
14. Based on the blink action, the letter is typed and displayed for effective communication.
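The click-versus-move decision stated earlier (zero white pixels means a closed, blinking eye and hence a click; one or more means the eye is open and the cursor keeps moving) can be sketched directly. The function name and the list-of-lists image representation are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the white-pixel decision rule: after binarization,
# a closed (blinking) eye contributes no white sclera pixels, which is
# interpreted as a click; otherwise a cursor movement is performed.

def classify_action(left_eye_bw, right_eye_bw):
    """Each argument is a binarized eye image: a 2-D list of 0/1 pixels."""
    white = sum(sum(row) for row in left_eye_bw) + \
            sum(sum(row) for row in right_eye_bw)
    return "click" if white == 0 else "move"
```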

8. Face Detection

A camera attached to the computer captures images of the person using the system. From the captured image, the human face is detected and cropped in order to detect the eyes. Face detection has been researched with many different methods, often characterized by the technique of the face detector. Such techniques can use colours, textures, features and templates. The following two techniques were tried in this proposed system in order to select the better one.

1) Skin Colour Analysis Method

Skin colour analysis is often used as part of a face detection technique. Various techniques and colour spaces can be used to separate pixels that belong to skin from pixels that are likely to belong to the background. This technique faces a big problem, as skin colours usually differ across people. In addition, in some cases skin colours may be similar to background colours: for example, a red floor covering or a red wooden door in the image can cause the system to fail.

2) Viola-Jones Algorithm Method

This method evaluates a set of features at a number of scales and at different locations and uses them to identify whether an image region is a face or not. A simple yet competent classifier is built by identifying a few efficient features out of the whole set of Haar-like features, which can be generated using the AdaBoost technique [3]. To provide real-time processing, a number of classifiers, each containing a set of features, are combined in a cascaded structure. According to the Viola-Jones algorithm [4], face detection exploits the facts that the human eye region is darker than the upper cheeks and forehead, as presented in Fig. 2(c), and that there is a brighter part between the two eyes that separates the left eye from the right eye, as presented in Fig. 2(b). The features used by the detection framework compute sums of image pixels over rectangular areas, as presented in Fig. 2(a), and each feature relies on more than one rectangular area. Fig. 2(a) presents the four types of features used in the Viola-Jones algorithm.
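The value of such a rectangle feature is a difference of pixel sums over adjacent rectangles, and the standard trick that makes these filters fast at any scale is the integral image (summed-area table), where any rectangle sum costs four lookups. The paper does not show its implementation, so the sketch below is illustrative; shaded-minus-unshaded for a two-rectangle feature follows the convention stated in this section.

```python
# Sketch: evaluating a two-rectangle Haar-like feature in constant time
# using an integral image (summed-area table).

def integral_image(img):
    """img: 2-D list of grayscale values. Returns a summed-area table
    with an extra leading row and column of zeros."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    computed from four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Right half (shaded) minus left half (unshaded) of a 2w-by-h window."""
    return rect_sum(ii, x + w, y, w, h) - rect_sum(ii, x, y, w, h)
```

Because every rectangle sum is four lookups regardless of rectangle size, the same feature can be evaluated at any scale and location at constant cost.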

Fig. 2(b) presents the feature that resembles the bridge of the nose, and Fig. 2(c) presents the feature that captures the eye region being darker than the upper cheeks.

FIG 2: Viola-Jones algorithm features

The value of a given feature is the sum of the pixels within the unshaded rectangle subtracted from the sum of the pixels within the shaded rectangle [3]. These rectangular filters are very fast to evaluate at any scale and location while capturing characteristics of the face. The collection of all possible features of the four types that can be generated on an image window is very large; applying all of them would be computationally intensive and redundant, so only a small subset of the large feature set is used. The advantage of the Viola-Jones algorithm is its robustness, with a very high detection rate and real-time processing.

FIG 3: Processing structure of the proposed method

9. Eye Detection

Eye movement analysis [5] can be used to analyze the performance of eye-to-cursor integration. The eye pair is detected and cropped from the cropped face by eliminating other face parts such as the mouth, nose and ears. The resulting image is divided into two parts: the left eye and the right eye. The left and right eye images are converted from RGB to gray scale, and noise is then removed using image enhancement techniques (median filter and Wiener filter). After this, the image is converted into a binary (black and white) image using a threshold value. The processing structure of the proposed method is shown in Fig. 3.

1.) Gray Scale Conversion

Gray scale images can be the result of measuring the intensity of light at each pixel according to a particular weighted combination of frequencies (or wavelengths). Gray scale conversion is done for edge detection.

Fig 4: Gray scale image of the eye.

2.) Image Enhancement

Removing noise and improving image quality gives better accuracy in computer vision. Noise may be Gaussian noise, balanced noise or impulse noise [6].
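The weighted combination used for the gray scale conversion step above is not specified in the paper; a common choice (the ITU-R BT.601 luma weights, which MATLAB's rgb2gray also uses) is sketched here as an assumption.

```python
# Sketch of per-pixel RGB-to-gray conversion using the common
# BT.601 luma weights (assumed; the paper does not state its weights).

def rgb_to_gray(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to a gray intensity."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```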

Impulse noise is distributed over the image as light and dark noise pixels and corrupts the correct information of the image; reducing impulse noise is therefore important in computer vision. In this paper, two image enhancement methods (median filter and Wiener filter) are used to remove noise.

3.) Image Binarization Using a Threshold Value

In most vision systems, it is helpful to separate the parts of the image that correspond to the object of interest from the parts that correspond to the background. Thresholding usually gives an easy and suitable way to carry out this segmentation based on the different gray-level intensities of an image.

A single or multiple threshold levels can be chosen for segmenting the image. For a single threshold level, every pixel is compared with the given threshold value: if the pixel's intensity is higher than the threshold, the pixel is represented as white in the output; conversely, if the intensity is less than the threshold, the pixel is represented as black. For multiple threshold levels, there are groups of intensities that are mapped to white, while intensities outside these groups are mapped to black. Generally, thresholding is useful for rapid image segmentation due to its simplicity and fast processing speed [7]. Image binarization is the process of converting a gray-level image into a black and white image using some threshold value.

Fig 5: Binary image after single and multiple thresholding

4.) Eye and Mouse-Cursor Integration

When both eyes are open, the left eye is divided into four quadrants to integrate with mouse-cursor movement. To divide the eye into four quadrants, the centre of the eye is used as a reference point. The eye corner locations are used to find the width and height of the eye, which in turn are used to calculate the centre of the eye. Using the x and y coordinates created at the corners of the eye, the centre of the eye is calculated [8].
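The quadrant integration above can be sketched as a lookup from pupil position (relative to the computed eye centre) to a screen quadrant. The quadrant numbering and the choice to drive the cursor toward each quadrant's centre are assumptions; the paper labels the quadrants 1-4 without defining the order.

```python
# Sketch of eye-to-screen quadrant mapping: the pupil's position relative
# to the eye centre selects one of four quadrants, and the cursor is
# driven toward the matching screen quadrant.
# Numbering (1 = top-left, 2 = top-right, 3 = bottom-left, 4 = bottom-right)
# is an illustrative assumption.

def eye_quadrant(px, py, cx, cy):
    """Pupil at (px, py), eye centre at (cx, cy); returns quadrant 1-4."""
    if px <= cx and py <= cy:
        return 1
    if px > cx and py <= cy:
        return 2
    if px <= cx and py > cy:
        return 3
    return 4

def screen_target(quadrant, screen_w, screen_h):
    """Centre point of the matching screen quadrant."""
    x = screen_w // 4 if quadrant in (1, 3) else 3 * screen_w // 4
    y = screen_h // 4 if quadrant in (1, 2) else 3 * screen_h // 4
    return (x, y)
```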

Fig. 6(a) presents the eye quadrants labeled 1, 2, 3 and 4, and Fig. 6(b) presents the quadrants of the computer screen, likewise labeled 1, 2, 3 and 4.

a) Eye quadrant    b) Screen quadrant
Fig 6: Eye and screen quadrants.

10. Conclusion

This paper focuses on the development of hands-free PC control: eye-ball-movement-based cursor and keyboard control. The most unique aspect of this system is that it does not require any wearable attachment. The mouse pointer is operated using the user's eye movement. The system has been implemented on Windows 8.1 with 4 GB RAM in the MATLAB environment using the MATLAB Image Acquisition and Image Processing Toolboxes.

The Image Processing Toolbox provides sufficient sensitivity for the system to work in real time, performing movement tasks the way a normal mouse does. An on-screen keyboard is introduced to perform all keyboard actions for effective communication. Our motive is to create this technology at the lowest possible cost, and to build it on a standard operating system in a more user-friendly manner.

11. References

[1] Craig Hennessey, Jacob Fiset, "Long Range Eye Tracking: Bringing Eye Tracking into the Living Room", IEEE, 2016.
[2] Shyam Narayan Patel, V. Prakash, "Autonomous Camera based Eye Controlled Wheelchair system using Raspberry-Pi", IEEE Sponsored 2nd International Conference on Innovations in Information Embedded and Communication Systems, 13 August 2015.
[3] Viola-Jones object detection framework (n.d.). In Wikipedia. Retrieved February 22, 2016, from https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
[4] M. Mangaiyarkarasi and A. Geetha, "Cursor Control System Using Facial Expressions for Human-Computer Interaction", ISSN: 0976-1353, Vol. 8, Issue 1, April 2014.
[5] Ziho Kang and Steven J. Landry, "An Eye Movement Analysis Algorithm for a Multielement Target Tracking Task: Maximum Transition-Based Agglomerative Hierarchical Clustering", IEEE Transactions on Human-Machine Systems, Vol. 45, No. 1, February 2015.
[6] Youlian Zhu, Cheng Huang, "An Improved Median Filtering Algorithm for Image Noise Reduction", SciVerse ScienceDirect, Elsevier, Physics Procedia 25 (2012) 609-616.
[7] Moe Win, A. R. Bushroa, M. A. Hassan, N. M. Hilman, Ari Ide-Ektessabi, "A Contrast Adjustment Thresholding Method for Surface Defect Detection Based on Mesoscopy", IEEE Transactions on Industrial Informatics, Vol. 11, No. 3, June 2015.
[8] Muhammad Usman Ghani, Sarah Chaudhry, Maryam Sohail and Muhammad Nafees Geelani, "GazePointer: A Real Time Mouse Pointer Control Implementation Based On Eye Gaze Tracking", IEEE, 978-1-4799-3043-2/13, 2013.
[9] Shrunkhala Satish Wankhede, S. A. Chhabria, R. V. Dharaskar, "Controlling Mouse Cursor Using Eye Movement", Special Issue for National Conference On Recent Advances in Technology and Management for Integrated Growth 2013 (RATMIG 2013).
[10] Akhil Gupta, Akash Rathi, Y. Radhika, "Hands-Free PC Control: Controlling of Mouse Cursor Using Eye Movement", International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012.
[11] Sidra Naveed, Bushra Sikander, and Malik Sikander Hayat Khiyal, "Eye Tracking System with Blink Detection", IEEE, 2012.
[12] Jixu Chen, Yan Tong, Wayne Gray, Qiang Ji, "A Robust 3D Eye Gaze Tracking System", IEEE, 2011.
[13] Ioana Bacivarov, Mircea Ionita, Peter Corcoran, "Statistical models of appearance for eye tracking and eye blink detection and measurement", IEEE Transactions on Consumer Electronics, Vol. 54, No. 3, pp. 1312-1320, August 2009.
[14] S. S. Deepika and G. Murugesan, "A Novel Approach for Human Computer Interface Based on Eye Movements for Disabled People", IEEE, 978-1-4799-6085-9/15, 2015.
[15] Ryo Shimata, Yoshihiro Mitani and Tsumoru Ochiai, "A Study of Pupil Detection and Tracking by Image Processing Techniques for a Human Eye-Computer Interaction System", 978-1-4799-8676-7/15, IEEE, SNPD 2015, June 1-3 2015, Takamatsu, Japan.
[16] M. Mangaiyarkarasi and A. Geetha, "Cursor Control System Using Facial Expressions for Human-Computer Interaction", ISSN: 0976-1353, Vol. 8, Issue 1, April 2014.