1. Introduction
Eye tracking, defined as the accurate measurement of a person's gaze orientation, is an important tool for understanding human visual attention, cognitive processes, and interaction input. Gaze is the external, observable index of human visual attention (Zhiwei & Qiang, 2005). Eye-tracking systems primarily use cameras and infrared (IR) illuminators: they track a glint in the eye by capturing the eye's reflection of infrared light, using what is known as the Pupil Centre Corneal Reflection (PCCR) method (Guestrin & Eizenman, 2006). Eye-tracking devices can be divided into two broad categories: intrusive trackers and non-intrusive trackers. Intrusive trackers require users to wear special devices, such as reflective contact lenses or VR headsets. These trackers are highly accurate, maintain a fixed distance from the eye, and allow a wide range of head movement (David et al., 2023; Namnakani et al., 2023). In non-intrusive trackers, the sensor unit is fixed at a distance; depending on their operating distance, they can be divided into short-distance (50-70 cm), medium-distance (100-200 cm), and long-distance (more than 300 cm) systems (Cho & Kim, 2013). Such devices are suitable for large-scale applications because the user does not need to wear any device (Namnakani et al., 2023). Screen-based systems are among the most popular eye-tracking methods. By tracking how users interact with visual information on the screen, researchers can optimize content, enhance the user experience, and improve visual communication (Byrne et al., 2023; David et al., 2023). Eye-tracking applications also extend beyond interface design to security domains such as deception detection (Wu et al., 2018) using physiological data (Hypšová et al., 2024) and phishing attack vulnerability assessment (Hussein, 2023).
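To make the PCCR principle concrete, the sketch below illustrates the polynomial calibration mapping commonly used in PCCR-based trackers: the vector from the pupil centre to the corneal glint is mapped to screen coordinates via coefficients fitted during calibration. This is a generic illustration, not the GP3 HD's internal algorithm; the second-order form and all names are our assumptions.

```python
import numpy as np

def fit_pccr_mapping(pg_vectors, screen_points):
    """Fit a second-order polynomial map from pupil-glint vectors to screen
    coordinates using calibration data (illustrative sketch only).

    pg_vectors:    (n, 2) pupil-centre-minus-glint vectors in camera pixels
    screen_points: (n, 2) known on-screen calibration targets
    """
    x, y = pg_vectors[:, 0], pg_vectors[:, 1]
    # Design matrix of polynomial terms: 1, x, y, xy, x^2, y^2
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2): one coefficient column per screen axis

def estimate_gaze(pg_vector, coeffs):
    """Map a single pupil-glint vector to an estimated screen point."""
    x, y = pg_vector
    return np.array([1.0, x, y, x * y, x**2, y**2]) @ coeffs
```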

Figure 1. (a) Head yaw, roll and pitch movements significantly affect eye-trackers' accuracy on larger screens (Rozali et al., 2022), (b) Gazepoint GP3 HD eye-tracker, and (c) LG 47-inch LED HDTV used for testing
Modern eye-tracking technology, despite its rapid advancement, still faces several technical bottlenecks. The primary challenge stems from measurement uncertainties caused by the distance between tracking devices and users' eyes. Lighting variations, head movements, and individual differences in eye anatomy all affect tracking performance (David et al., 2023; Namnakani et al., 2023). Current technology supports head movements only within a limited range, and struggles in particular to track vertical and depth movements effectively. To address these challenges, researchers are actively developing machine learning-based algorithms to enhance system stability (Byrne, 2012; Byrne et al., 2023; David et al., 2023), while coordinate transformation techniques have proven effective in related tracking domains (Jiang et al., 2024). State-of-the-art systems combine depth cameras and pan-tilt-zoom sensors to expand the tracking range and improve head and eye movement capture (Cho & Kim, 2013; Cullipher et al., 2018; Schwind et al., 2015).
This research proposes a novel eye-tracking solution specifically designed for large-screen applications. Eye tracking on large screens is critical to understanding how people interact with complex setups such as air traffic control interfaces (Wang et al., 2021). It provides valuable insights for optimizing layouts, improving usability, and ensuring ergonomic designs. Existing cost-effective eye-tracking products are typically limited to displays of up to 27 inches, beyond which trackers lose accuracy or cannot detect users' eyes (Tobii, n.d.). Human vision is commonly divided into three zones: focus, field of vision, and peripheral field of vision. The field of vision is the portion of the visual environment visible to us in a three-dimensional view; its horizontal extent is approximately 60° (Rutkowski & May, 2017). With screens and trackers operating 24-25 inches away from the human eye, a 27-inch screen is roughly the largest size that fits within this horizontal field of view. To address this limitation of current products, we designed an adjustable guide rail that broadens coverage by allowing the tracking unit to move along a track. This research focuses on the key design elements of this rail system.
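A back-of-the-envelope check of this size limit (assuming 16:9 panels and a 25-inch viewing distance; both are our illustrative assumptions):

$$w_{\mathrm{FOV}} = 2d\tan\!\left(\tfrac{\theta}{2}\right) = 2 \times 25\,\mathrm{in} \times \tan 30^{\circ} \approx 28.9\,\mathrm{in}, \qquad w_{27''} = 27 \times \frac{16}{\sqrt{16^{2}+9^{2}}} \approx 23.5\,\mathrm{in}.$$

A 27-inch 16:9 display (about 23.5 inches wide) therefore fits within the roughly 29-inch window covered by a 60° horizontal field of view at that distance, whereas a 47-inch display (about 41 inches wide) does not.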
While current eye-trackers in the market utilize fixed designs, and previous research has employed pan-tilt-zoom cameras to enhance sensor unit adaptability (Cho & Kim, 2013), our approach further explores the possibilities of a moving camera. This breakthrough design not only overcomes the limitations of static cameras but also contributes to the development of more versatile eye-tracking solutions. Through this innovation, we can support larger display screens while providing users with a greater scope of head movement (Byrne et al., 2023; David et al., 2023).
This paper follows the classical 5-stage design thinking approach:
Empathize – Understanding the User and the Problem Space: This stage focuses on gaining a deep understanding of eye tracking and its applications. We spent multiple weeks reading about and testing various screen-based eye-tracking technologies and identifying their limitations. Our findings revealed that static trackers struggle with accuracy on large screens due to their restricted tracking range. While many commercial trackers claim high accuracy, real-world testing showed significant gaps, especially in large display setups. This phase also made it clear that industries like gaming, research, and assistive technology have different tracking needs, reinforcing the need for a flexible solution. As designers and engineers, we often default to conventional, one-size-fits-all solutions when confronted with challenges. This approach, though efficient in some cases, can overlook the unique needs and complexities of specific problems. The 5 stages of Design Thinking provide a structured framework that encourages deeper exploration. By empathizing with users, we can uncover insights that standard solutions miss, allowing for more targeted and effective interventions. This process emphasizes the importance of understanding the problem in its entirety, leading to innovative solutions that are both user-centered and contextually relevant.
Define – Clearly Identifying the Main Problem: This step involves articulating the specific challenge faced in large-screen, non-intrusive eye tracking. The Introduction and Related Work sections define the problem, explaining why existing trackers fail and why a solution is necessary. We analyze prior research to establish the need for a dynamic tracking approach. A major takeaway was that most previous studies focused on small screens and static setups, so there was little guidance on designing for large screens with movement. Clearly defining the problem early also helped us avoid unnecessary design detours, as some of our initial ideas did not fully address the main tracking issue.
Ideate – Exploring Possible Solutions: The ideation phase focuses on brainstorming and testing concepts to address the problem. The Methodology section up to sub-section 3.4 details our experiments with the eye tracker's real-world range and movement feasibility. We explored whether a moving tracker could improve accuracy and tested various implementation methods. One of the biggest lessons here was how useful quick, low-cost testing can be: before even building a prototype, simple setups with manual movement helped validate the idea. We also realized that some potential solutions looked great in theory but did not hold up in real-world testing, reinforcing the need for an iterative approach.
Prototype – Developing a Working Model: This step involves creating a functional prototype based on the chosen concept. Sub-sections 3.5 and 3.6 describe our process of designing and manufacturing a 3D-printed PLA guide rail system to enable controlled tracker movement; a motor control mechanism was also considered for future iterations (see Section 5), with the assembly illustrated in Figure 4. A key challenge here was balancing stability and smooth movement: early versions either introduced too much vibration or restricted flexibility. We went through multiple iterations, adjusting materials, positioning, and alignment to refine the system and get it working reliably.
Test – Evaluating Performance and Validating Accuracy: The testing phase assesses whether the prototype effectively improves eye-tracking accuracy. Our calibration test confirmed that a moving tracker enhances coverage and accuracy compared to a static system. However, additional tests are needed to evaluate its applicability in research and industrial settings. One lesson from this stage was that testing should be planned alongside prototyping, since we initially underestimated the complexity of calibrating a moving tracker. We also realized that factors like ease of setup, recalibration speed, and software integration are just as important as raw accuracy when designing something for real-world use.
2. Related work
Eye-tracking research has evolved significantly, encompassing various technological approaches and applications. This section reviews key developments in low-cost solutions, applications across different fields, and challenges specific to screen-based eye tracking. Recent years have seen progress in developing affordable eye-tracking alternatives. Fuhl et al. (2016) conducted comprehensive evaluations of low-cost systems, developing an algorithm that demonstrated superior performance compared to existing methods, while noting the inevitable accuracy trade-offs compared to premium devices. A significant breakthrough came from Papoutsaki et al. (2016), who developed WebGazer, a webcam-based solution enhanced through user interaction. Building on this work, Steil et al. (2019) demonstrated that cost-effective solutions could maintain acceptable data quality while significantly improving accessibility.
The fundamental architecture of screen-based eye tracking involves specialized hardware and software systems designed to monitor user gaze during screen interaction (Jacob & Karn, 2003; Poole & Ball, 2006). Most modern systems employ the Pupil Center Corneal Reflection (PCCR) technique, which combines IR illumination and camera capture to calculate precise gaze points (Holmqvist et al., 2011). The accuracy of these systems heavily depends on proper calibration procedures throughout tracking sessions (Papoutsaki et al., 2016).
The increasing prevalence of large displays has introduced new challenges to eye-tracking technology (Lee et al., 2013). Current tracking systems face significant limitations, typically supporting screens only up to 27 inches, beyond which they encounter issues such as dead zones or compromised tracking accuracy (Using an eye-tracker on a larger screen than recommended, 2016). Head movement poses a particular challenge, affecting both calibration stability and tracking precision (Zhiwei & Qiang, 2005). Researchers have proposed various solutions to address these limitations, including dynamic head mapping algorithms (Zhu & Ji, 2007), multi-camera configurations (Sheng-Wen & Jin, 2004), and Pan-Tilt-Zoom (PTZ) camera systems (Cho et al., 2012). Yet persistent challenges remain, particularly in large-screen applications; our proposed work therefore builds upon these foundations while addressing the specific challenges of large-screen eye tracking through innovative mechanical solutions. Most existing large-screen eye-tracking systems that use pan-tilt cameras attempt to compensate for the limitations of static trackers by dynamically adjusting the sensor's position. These setups rely on motorized PTZ cameras that physically move to keep the user's eyes within the tracking range as they shift their gaze. Cho & Kim (2013) proposed a long-range gaze tracking system using a PTZ camera, allowing continuous eye tracking even with significant head movements. Ohno & Mukawa (2004) explored a stereo-camera setup to triangulate gaze position, providing a wider tracking range. However, PTZ-based setups still struggle to compensate for large head movements, particularly in close-range interactions. To maintain tracking accuracy, these systems require the user to sit farther from the screen, often at a distance that is impractical for real-world applications where users need to interact closely with on-screen content. This limitation makes PTZ cameras less effective for setups where natural head movements and close-range screen interaction are essential.
3. Designing an eye-tracking device
This section describes the design and implementation of our proposed dynamic eye-tracking system for large screens. We first analyze current market limitations, then propose our solution's components and testing methodology in detail. The ideation phase focused on figuring out how to expand the tracker's effective range without losing accuracy. Early tests involved manually repositioning the tracker to different points along the screen to see if localized calibration improved tracking. This led to the idea of a moving platform that could dynamically shift the tracker’s position to match different sections of the screen. Various movement mechanisms were considered, including motorized rails, articulated arms, and fixed-position multi-tracker setups, but each had trade-offs in complexity, cost, and usability. The final concept prioritized a simple, mechanically controlled guide rail that allowed precise repositioning without the need for excessive recalibration or expensive additional hardware.
3.1. System requirements analysis
Given the challenge of a limited viewing window, it became evident that covering the entire screen would require either multiple trackers or a single tracker capable of moving. Most commercial eye-trackers are static and optimized for 24-27-inch screens, with recommended viewing distances of 24-25 inches (David et al., 2023). When used with larger displays, these devices face significant challenges in maintaining accuracy, particularly due to the increased head movements required to view the entire screen (Namnakani et al., 2023). Our preliminary testing revealed that accuracy decreased significantly at screen edges, as head yaw, roll, and pitch movements affected eye visibility (see Figure 1a).
3.2. Hardware components
For our design prototype, we selected the Gazepoint GP3 HD eye-tracker (shown in Figure 1b) for its compact size (125 g), high accuracy (0.5-1 degree), and robust performance (Gazepoint Control User Manual Rev 2.0, 2019). Testing was conducted on an LG 47-inch LED HDTV (model 47LB5900, shown in Figure 1c), providing sufficient screen area beyond the tracker's optimal 24-inch window.
3.3. Screen setup
The screen used for the experiment is an LG 47-inch Class LED HDTV. Placing the tracker below the centre of the screen and conducting multiple calibration cycles showed that the GP3 HD failed to calibrate in most iterations. In the two iterations that succeeded, with the head held still and the eyes strained toward the screen corners, tracking worked only within the 24-inch area above the device. Beyond this window, accuracy decreased significantly on the right and left sides of the screen, as head-yaw movements caused one eye to become less visible. Accuracy also diminished towards the top of the screen, as head-pitch movements limited eye visibility. Our software solution utilizes virtual display technology through Amyuni Technologies' secondary desktop package and OBS Studio for precise calibration (Siegle, 2024). This setup enables the projection of calibrated 24-inch windows onto specific screen segments, which can be individually calibrated, as illustrated in Figure 2. This allows us to check the tracker calibration for different sections of the screen.

Figure 2. Window projection on a large screen can be adjusted using Gazepoint Control: (a) The native Gazepoint control software, (b) Open Broadcaster Software (OBS Studio), and (c) Small screen projection on a large screen
3.4. Design evolution and testing
Initial testing of our guide rail system showed that parallel horizontal movement alone was insufficient for accurate eye tracking when testing coverage of the lower half of the screen (Figure 3a). The tracker failed to capture proper eye contact when it moved only parallel to the screen. We therefore oriented the tracker to face the user (Figure 3b) and designed the movement to maintain a constant distance from the user's eyes (Brand et al., 2021). For the lower half of the 47-inch screen, the optimal configuration positioned the tracker at a height of 9 cm with a pitch of 32 degrees, allowing it to face the user directly in the test environment. This circular path, with the user's eyes as the centre of rotation, enabled tracking across the entire lower half of the screen. A height of 9 inches with a pitch inclination of 22 degrees accurately tracked the upper half of the screen. The circular path required for the tracker movement is illustrated as the tracker path (Figure 3c); it is determined by the human field of view and by iterating the tracker's distance from the screen for the given dimensions of the test environment.
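As a sketch of the path geometry, the snippet below computes tracker stops along a circular arc centred on the user's eyes, so the tracker keeps a constant distance and always faces the user. The 63.5 cm (25 in) radius and ±30° sweep are assumptions matching the test environment described above, and the function names are ours.

```python
import numpy as np

def tracker_arc(radius_cm, half_sweep_deg, n_stops=7):
    """Stop positions along a circular path centred on the user's eyes.

    Returns lateral offset x, depth offset z (how far the arc curves back
    toward the user relative to a straight rail), and the yaw each stop
    needs to keep facing the user. Illustrative sketch only.
    """
    angles = np.radians(np.linspace(-half_sweep_deg, half_sweep_deg, n_stops))
    x = radius_cm * np.sin(angles)           # movement along the screen
    z = radius_cm * (1.0 - np.cos(angles))   # sag of the arc vs. a straight rail
    yaw_deg = np.degrees(angles)             # rotation toward the user
    return x, z, yaw_deg

# Example: 25 in (63.5 cm) eye-to-tracker distance, +/-30 deg horizontal sweep
xs, zs, yaws = tracker_arc(63.5, 30.0)
```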

Figure 3. Design evolution: (a) Tracker position testing, (b) Tracker inclined to face the user, and (c) Tracker position and dynamic path based on field of view
3.5. Guide rail system design
The final design incorporates several innovative elements: (1) T-section base: follows the curved trajectory labelled as the tracker path in Figure 4a. (2) Compliant mechanism: uses flexible beam-groove mechanisms (Ma & Chen, 2015) for vertical adjustment, with a compliant snap-fit mechanism holding the pieces in place. (3) Torque hinge: provides precise pitch-angle control and supports the weight of the tracker to maintain eye contact.
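For sizing the torque hinge, a simple static estimate is useful: the holding torque must at least balance the tracker's weight acting at its lever arm from the hinge axis. With the GP3 HD's 125 g mass (Section 3.2) and an assumed 5 cm lever arm (the arm length is our illustrative assumption, not a measured value):

$$\tau \ge mgL = 0.125\,\mathrm{kg} \times 9.81\,\mathrm{m/s^{2}} \times 0.05\,\mathrm{m} \approx 0.06\,\mathrm{N\,m}.$$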

Figure 4. CAD model showing: (a) Guide rail base, Sleeve with compliant mechanisms and compliant snap-fit buckle, (b) Horizontal Rail and Final assembly
3.6. Manufacturing implementation
The prototype was manufactured using PLA 3D printing, chosen for its ability to produce complex geometries and its appropriate mechanical properties, such as tensile strength (Mayyas, 2022). The printing process was carefully tuned to optimize the performance of the compliant mechanisms and to prevent layer delamination (Nugroho et al., 2018; van der Borg et al., 2023). The integrated system successfully tracks gaze across the entire 47-inch display by combining mechanical movement with precise calibration capabilities. Figure 5 shows the completed prototype of the tracking assembly.

Figure 5. Final prototype demonstration showing dynamic tracking capabilities
4. Analysis and findings
The objective of this analysis is to assess whether the device tracks gaze accurately across all screen regions. We developed a comprehensive testing approach to quantify tracking accuracy across different screen sections.
4.1. Validation methodology
While the Gazepoint software provides native calibration capabilities, it lacks quantitative accuracy metrics. Therefore, we developed a secondary validation test using virtual display technology (Siegle, 2024) to divide the 47-inch screen into six 24-inch sections (as shown in Figure 6a). For each section, users were positioned 25-26 inches from the screen and completed a 5-point calibration process.
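The sketch below illustrates how the six overlapping section origins can be computed for a given display resolution. The 1920×1080 resolution and half-screen sub-window size are assumptions for illustration (standing in for the 24-inch region shown on the large display), and the function name is ours.

```python
def section_origins(screen_w, screen_h, cols=3, rows=2, win_w=960, win_h=540):
    """Top-left pixel origins for overlapping calibration windows (3 x 2 grid)
    spread evenly across the display. Illustrative sketch only."""
    xs = [round(i * (screen_w - win_w) / (cols - 1)) for i in range(cols)]
    ys = [round(j * (screen_h - win_h) / (rows - 1)) for j in range(rows)]
    return [(x, y) for y in ys for x in xs]

# Example: six sections on a 1920x1080 display
print(section_origins(1920, 1080))
# [(0, 0), (480, 0), (960, 0), (0, 540), (480, 540), (960, 540)]
```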
4.2. Testing protocol
The designed validation test required users to fixate on a red cross following a 5-point pattern (Figure 6b), maintaining focus for 10 seconds at each position. We defined Areas of Interest (AOIs) around each fixation point (Figure 6c): an inner AOI for precise targeting and an outer AOI for acceptable accuracy (Punde et al., 2017). Only the co-authors participated in this protocol, which was intended to evaluate the feasibility of the proposed system; an iterative testing process was followed to validate accuracy. The test scenario was designed to resemble the calibration phase of the eye-tracker.
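A minimal sketch of the AOI logic follows: each gaze sample is labelled by its distance from the target. Circular AOIs, the radii, and all names are our assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class AOI:
    cx: float       # target centre x (px)
    cy: float       # target centre y (px)
    inner_r: float  # inner AOI radius (px), for precise targeting
    outer_r: float  # outer AOI radius (px), for acceptable accuracy

def classify_sample(gx, gy, aoi):
    """Label one gaze sample relative to a fixation target (sketch;
    circular AOIs are an assumption for illustration)."""
    d = math.hypot(gx - aoi.cx, gy - aoi.cy)
    if d <= aoi.inner_r:
        return "inner"
    return "outer" if d <= aoi.outer_r else "miss"
```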

Figure 6. (a) Division of the 47-inch screen into six 24-inch calibration sections used for the validation methodology, (b) the five-point validation test pattern used in the testing protocol, and (c) the inner and outer AOI definitions for each fixation point
4.3. Accuracy analysis
Accuracy was measured using two primary metrics: the Fixation Precision Ratio (number of fixations in the inner AOI versus total fixations) and the Time Precision Ratio (viewing duration in the inner AOI versus total viewing time). Results across all six screen sections demonstrated consistently high accuracy. The comprehensive results (Table 1) indicate that our dynamic tracking system maintains high accuracy across the entire 47-inch screen, with most measurements achieving precision ratios above 0.9. These results suggest that the system provides reliable gaze-tracking data across the entire screen area, making it suitable for large-screen applications.
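The two metrics can be expressed compactly as below. This sketch assumes the tracker's software has already grouped gaze samples into fixations with durations, and treats the inner AOI as a circle; both are our assumptions, not the Gazepoint fixation filter.

```python
import math

def precision_ratios(fixations, target, inner_r):
    """Fixation Precision Ratio and Time Precision Ratio for one target.

    fixations: list of (x, y, duration_s) tuples from the tracker
    target:    (x, y) fixation-target centre in pixels
    inner_r:   assumed inner-AOI radius in pixels
    """
    def in_inner(x, y):
        return math.hypot(x - target[0], y - target[1]) <= inner_r

    inner = [f for f in fixations if in_inner(f[0], f[1])]
    fix_ratio = len(inner) / len(fixations) if fixations else 0.0
    total_time = sum(d for _, _, d in fixations)
    time_ratio = sum(d for _, _, d in inner) / total_time if total_time else 0.0
    return fix_ratio, time_ratio
```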
Table 1. Accuracy of the left, centre, and right sections for the top and bottom test results

5. Conclusions and future work
In this work, our primary focus was on assessing the feasibility and viability of our approach, specifically by testing the calibration phase. Since a failure at this stage would render further research impractical, we prioritized this aspect. However, we recognize the importance of broader testing and plan to design a study incorporating behavioural research questions to explore our design further.
Following the 5-stage design thinking process, we saw firsthand that design thinking is not strictly linear; we had to revisit earlier stages multiple times based on new findings. One of the biggest takeaways was the importance of validating core assumptions early, instead of waiting until the prototyping stage to determine feasibility. This project also reinforced the need for a user-centered approach, since considering different industries helped refine our final design decisions. For anyone following a similar process, iterating quickly and testing often will help avoid wasted effort and lead to a more effective final solution.
We designed a prototype of an adjustable platform for the Gazepoint GP3 HD tracker that demonstrates the potential for large-screen applications while highlighting several areas for future development. Our work addresses the need for accurate, non-intrusive eye tracking on large screens, a challenge current static systems fail to overcome effectively. Guided by a user-centered design philosophy, the solution addresses adaptability and user comfort in diverse scenarios. Through iterative testing, the prototype demonstrated accurate performance, validating the concept of dynamic gaze tracking. Our approach not only resolves existing limitations but also aligns with the design principles of scalability and modularity, ensuring that the system can evolve with future technological advances. The result contributes to areas such as education, accessibility, and gaming, offering a robust foundation for more interactive and inclusive user experiences.
This study focused on validating the mechanical feasibility of the guide rail design. While motorization was considered for future iterations to enable dynamic adjustments, compliant structures were used for position holding to simplify initial testing. With the guide rail's effectiveness proven, future developments can integrate motorization, building on this foundation to optimize performance and enable precise position tracking. Future iterations could incorporate automation through stepper motors for precise motion control (Bendjedia et al., 2012), alongside a central wide-angle camera for face detection and tracker positioning, building on existing pan-tilt-zoom camera approaches (Cho & Kim, 2013). The persistent challenge of head-movement-induced accuracy degradation in static eye trackers (Hermens, 2015) could be addressed through dynamic head mapping techniques (Zhu & Ji, 2007) or recent Kappa-angle-based automatic calibration systems (Liu et al., 2023).
While alternatives such as glasses-based tracking (Niehorster et al., 2020) and VR headsets (Clay et al., 2019) can address head-movement issues with large screens, and VR environments offer controlled research conditions (Blascovich et al., 2002; Li et al., 2021), screen-based tracking remains essential in specific industries. New methods have also been developed to incorporate VR headsets for collecting eye movements, transferring 2D experiments into 3D to collect more accurate data and sidestep the large-screen and multi-screen environment problem (Bagherzadeh & Tehranchi, 2024a, 2024b), but their applications still need to be investigated.
This research provides a foundation for future developments in dynamic eye-tracking systems, particularly for applications where traditional static trackers prove insufficient. The findings suggest that despite emerging alternatives, screen-based systems maintain their importance in specific industrial applications, validating further development of dynamic tracking solutions. Sectors such as air traffic control and mining rely on traditional display interfaces due to safety requirements and existing infrastructure, underscoring the continued relevance of our dynamic tracking solution. Because the solution is built from 3D-printed parts, it is easy to manufacture, readily reproducible, and open to customization.