A History of Touch Screens

What is a touch screen?

A touch screen is an electronic display that doubles as an input device: it detects contact within the display area and uses the location of that contact as an input. Multiple types of touch screen have been developed over the years, each improving touch sensitivity and eventually introducing multi-touch and gestures. There are five main types: resistive and capacitive touch screens make up the majority of the market, while infrared, optical and surface acoustic wave technologies are used in more niche applications.

The 1960s

The touch screen was first theorised in 1965 by Eric Arthur Johnson, an engineer at the Royal Radar Establishment in Malvern, Worcestershire, England. Working in air traffic control, he concluded that a touch-based interface would improve the response capacity of workers, allowing them to mark points more precisely and report potential threats more quickly. His version depended on touch by a human finger: because the human body is conductive, a finger touching the screen changes the capacitance of the circuit at that point, and this change can be detected and located. In 1967 he wrote a second paper detailing further potential of the touch screen, such as its use as an alternative input to a keyboard, complete with diagrams and photos of the technology in action. He was granted a US patent for the capacitive touch screen in 1969, although it would take until the 1990s for the technology to be implemented in air traffic control.

The 1970s

In 1971, a group at the University of Illinois worked on the idea of an optical touch screen, one that required no pressure and could be operated by any object as well as a finger. This version of the touch screen uses LEDs and phototransistors arranged around the display to detect when and where an object breaks the paths of light between them. It was implemented in the PLATO IV Touch Screen Terminal, part of the university's generalised computer-assisted instruction system, allowing students to answer questions by touching the screen.
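
The beam-break principle is simple enough to sketch in a few lines of code. The following Python is purely illustrative, not the PLATO IV's actual logic: it locates a touch from which horizontal and vertical light paths report being interrupted.

```python
# Illustrative sketch of the beam-break principle behind an optical (infrared
# grid) touch screen: LEDs on two edges shine across the display at
# phototransistors on the opposite edges, and a touch is located by which
# horizontal and vertical beams it interrupts. All names here are hypothetical.

def locate_touch(broken_columns: list[bool], broken_rows: list[bool]):
    """Return the (column, row) of a touch, or None if no beam is broken."""
    cols = [i for i, broken in enumerate(broken_columns) if broken]
    rows = [i for i, broken in enumerate(broken_rows) if broken]
    if not cols or not rows:
        return None
    # A fingertip usually breaks a few adjacent beams; take the centre.
    return sum(cols) // len(cols), sum(rows) // len(rows)

# Example: a finger interrupting beams 2-3 horizontally and beam 1 vertically.
print(locate_touch([False, False, True, True], [False, True, False]))  # (2, 1)
```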

George Samuel Hurst then began developing his version, the resistive touch screen, at the University of Kentucky and later Oak Ridge National Laboratory. His device, known as the Elograph, is formed of two conductive sheets separated by a tiny gap; pressing them together changes the resistance of the circuit, and the resulting readings can be used to determine the coordinates of the contact. Like the optical alternative it could be operated by touch or by objects, but it was more compact and so was preferred. The first iteration was not transparent, as it was used to record results for an experiment with a Van de Graaff generator as it accumulated and released an electric charge, but after refinement with his team at Elographics he was able to patent a transparent resistive touch screen in 1974.
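
The coordinate calculation behind Hurst's design can be illustrated with a short sketch. The Python below is a minimal, hypothetical example of how a four-wire resistive controller might convert readings from the two sheets into screen coordinates; the ADC resolution and screen size are invented for illustration.

```python
# Minimal sketch of how a controller reads a 4-wire resistive touch screen.
# When the two conductive sheets are pressed together, each sheet acts as a
# voltage divider: the voltage measured on one sheet is proportional to the
# touch position along the other sheet's axis. Names and values here are
# illustrative, not taken from any specific product.

ADC_MAX = 1023          # hypothetical 10-bit analogue-to-digital converter
SCREEN_WIDTH = 320      # hypothetical target resolution in pixels
SCREEN_HEIGHT = 240

def adc_to_position(adc_x: int, adc_y: int) -> tuple[int, int]:
    """Convert raw ADC readings from the two sheets into pixel coordinates."""
    x = adc_x * SCREEN_WIDTH // ADC_MAX
    y = adc_y * SCREEN_HEIGHT // ADC_MAX
    return x, y

# Example: readings halfway up each divider map to the centre of the screen.
print(adc_to_position(512, 512))   # -> (160, 120)
```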

The 1980s

In 1982, Nimish Mehta at the University of Toronto worked on a touch tablet that used a camera behind frosted glass to detect the shadows and dark spots created on the screen when it was touched. As it could see multiple shadows at once, this became the first example of multi-touch, although it could not detect what was touching it. The idea was further developed by Myron Krueger, an American computer artist, who refined it into an optical system for tracking hand movements. He used it in VIDEOPLACE, his artificial reality laboratory project, which allowed users in different rooms to interact with each other in a virtual space. The introduction of the idea of multi-touch inspired Bob Boie at Bell Labs to begin experiments combining capacitive touch screens with CRT displays, the popular computer monitors of the time. In 1984 he developed a transparent touch overlay capable of detecting multiple contact points, making it possible to manipulate images with more than one hand.

At this point, the idea of touch-based interfaces began to expand into other areas of technology. In 1982, the Vectrex gaming console was released, and with it the light pen controller, which allowed players to draw and interact with objects on the screen without a mouse or other controller. The pen contained a light sensor that detected the bright spot traced by the display's electron beam, allowing the console to work out where on the screen the pen was pointing.

The first commercially available use of a touch screen came in 1983 from Hewlett-Packard (HP) with the HP-150, a compact personal computer in which the computer and screen shared the same base unit. It relied on infrared emitters and detectors around the screen, which could detect any non-transparent object touching the display. This method had its problems: the detectors sat in very small holes that easily filled with dust, rendering the touch screen unusable until the dust was vacuumed out, and it was easy to block more than one detector at once, reducing the accuracy of touch detection.

Resistive touch became the more popular option as it was much cheaper to produce and run, in terms of materials, complexity and power requirements. Resistive screens also allow for a much larger touch area and are not affected by water, making them suitable for underwater and all-weather applications. Although the technology was designed as a screen, many products first featured it in touchpads: separate peripherals, like keyboards, as opposed to an integrated solution like the touch screens we know today.

In Japan in 1985, the Terebi Oekaki (TV Draw), or Sega Graphic Board, arrived as a drawing tablet controller for the Sega SG-1000, SC-3000 and Master System. The first touch-based game controller, it paired a simple drawing program with a plastic pen on a glass resistive screen, letting users draw in 15 colours; the glass screen was transparent, so users could also trace images. It was not very accurate, and the plastic-on-glass materials made for an uncomfortable drawing experience, but it opened up the possibility of using touch screens for gaming in ways other than just pointing and clicking on images. It inspired later controllers such as the drawing tablet for Artist Tool on the PC Engine, and eventually the creation of dedicated digital drawing tablets such as the Digitizer Mark-III, which Sega used in the production of digital graphics for its games. Sega also used the technology in World Derby, the first touch-based arcade game, in 1987.

1987 was also the year of the first computer to feature handwriting recognition, the Linus Technologies Write-Top. It used a resistive touch screen with a wired stylus, recording the change in voltage on the screen as the user wrote. Aimed at healthcare as a way of digitising doctors' notes, it was a portable tablet whose writing area was the size of a 3-by-5-inch note card, complete with lines, and it could digitise notes at a rate of 5 characters per second. It was also marketed for insurance and sales, but with less success. The inventor, Ralph Sklarew, demoed his handwriting software to Microsoft, Apple and GRiD (where Jeff Hawkins was working before founding Palm Computing) to seek funding for a smaller, pocket-sized version of the Write-Top, but these companies were all already working on their own versions of the technology, which would become the Apple Newton and the PalmPilot.

But before fully touch-screen electronic organisers were invented, there were organisers with integrated touchpads, as seen on the Sharp Wizard in 1989. It featured a resistive touch panel below the screen with different overlays for the different programs on the device, allowing command shortcuts with up to 20 'buttons' available per overlay. Named IC cards, the overlays also extended the programs available on the device to include a thesaurus, a time and expenses manager, language translators and even games. Sharp would later partner with Apple on the Newton.

The 1990s

In 1991, David Martin and Nancy Knowlton of SMART Technologies began developing their interactive whiteboards: large-scale LCD touch screens connected to a computer and activated by a stylus, designed for collaborative work in offices by combining a whiteboard and a computer in one device. The LCD (liquid crystal display) screen was chosen for its low profile, which allowed the screen to be hung on a wall.

Development of touch screens in handheld devices took off in 1992 with the announcement of the first-ever smartphone, the Simon Personal Communicator from IBM. This was the first phone to feature a touch screen, operated with a stylus, allowing access to email and faxing on the go. Although software delays led to Simon not being released until 1994, and poor battery life made its portability questionable, it marked an important step in the evolution of touch screen devices. Another reason for Simon's unpopularity was Apple's release of the Newton PDA in 1993. This was the first handheld touch screen device to hit the consumer market and the first to be referred to as a Personal Digital Assistant (PDA), a term coined by Apple CEO John Sculley. The Newton was designed to be a new class of computer, one that fitted into your pocket, and the touch screen let it do even more, most notably through its handwriting recognition software, the first mainstream example. The recognition struggled: it had been designed around the lead designer's handwriting, and rather than testing it against other handwriting, the other designers adjusted their handwriting to match. Although the Newton was first, the more popular Palm range soon took over the market with the PalmPilot in 1996. But even with improvements in the reliability and sensitivity of the touch screen, PDAs would soon move to a model featuring both a built-in QWERTY keyboard and a touch screen for optimal performance. The term PDA would eventually be recycled from hardware to software, in digital assistants such as Siri, Cortana and Alexa.

Sega continued its development of touch screen gaming with the Sega Pico, released in 1993. This was an educational console aimed at children aged 2 to 8, using a touchpad and stylus along with four buttons for user interaction, and it was the first home console to use touch as the default control system. It became one of the most successful educational consoles ever produced: although discontinued in North America and Europe in 1998, it was sold in Japan until 2005, where it was succeeded by the Advanced Pico Beena. The Beena added multiplayer with the option of a second magic pen, game-progress saving through an SD card slot, and touch on every page of its storyware cartridges. 1997 saw the release of the first games console to include a touch screen, the Game.com from Tiger Electronics. It was released as a competitor to the Game Boy, adding a touch screen and internet connectivity, as well as popular PDA features like a calculator and a calendar. The screen itself was divided into a visible grid to help players touch accurately during play.

1999 saw the founding of FingerWorks by Wayne Westerman and John Elias at the University of Delaware, with the aim of creating multi-gesture input devices. The company originally focused on low-impact touchpad keyboards and other devices for people with hand disabilities, such as Westerman's carpal tunnel syndrome. Its touchpad patents would eventually be used by Apple to produce the touchpads for its PowerBook laptops, and led to Apple acquiring the company in 2005.

The 2000s

As the new millennium approached, companies were pouring more resources into integrating touch screen technology into their daily processes. 3D animators and designers were especially targeted with the advent of the PortfolioWall, a large-format touch screen meant to be a dynamic version of the boards design studios use to track projects. Though development started in 1999, the PortfolioWall was unveiled at SIGGRAPH in 2001 and was produced in part through a collaboration between General Motors and the team at Alias|Wavefront. It used a simple gesture-based interface that let users inspect and manoeuvre images, animations and 3D files with just their fingers, and made it easy to scale images, fetch 3D models and play back video. A later version added sketch and text annotation, the ability to launch third-party applications, and a Maya-based 3D viewer with panning, rotating and zooming of 3D models. For the most part, the product was considered a digital corkboard for design-centric professions.

2001 saw the demonstration of the first touch screen smartwatch, the Linux Watch by IBM. The watch used a Burr-Brown ADS7843 touch screen controller, with the display mapped into four quadrants. When a touch on the screen was captured, the coordinates determined which quadrant had been pressed, and the corresponding application was run. With a touch screen offering almost unlimited input options, restricted only by the memory of the device rather than by the limited buttons of previous smartwatches, the rush to create interactive wearable technology began.
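
As a rough illustration of that quadrant scheme, the sketch below maps a touch coordinate to one of four regions and picks an application to launch. The display size and application names are invented for illustration; this is not IBM's actual firmware.

```python
# Hedged sketch of the quadrant scheme described above: the screen is split
# into four regions and a touch launches whichever application owns the
# region. Screen size and application names are hypothetical.

WIDTH, HEIGHT = 96, 120   # hypothetical watch display resolution

APPS = {
    (0, 0): "calendar",   # top-left
    (1, 0): "messages",   # top-right
    (0, 1): "clock",      # bottom-left
    (1, 1): "settings",   # bottom-right
}

def app_for_touch(x: int, y: int) -> str:
    """Map a touch coordinate to the application owning that quadrant."""
    right = int(x >= WIDTH // 2)   # 0 = left half, 1 = right half
    lower = int(y >= HEIGHT // 2)  # 0 = upper half, 1 = lower half
    return APPS[(right, lower)]

print(app_for_touch(10, 15))   # -> "calendar"
print(app_for_touch(80, 100))  # -> "settings"
```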

In 2002, Sony introduced a flat input surface that could recognise multiple hand positions and touch points at the same time, which it called SmartSkin. The technology worked by calculating the distance between the hand and the surface using capacitive sensing and a mesh-shaped antenna. Unlike camera-based gesture recognition systems, the sensing elements were all integrated into the touch surface itself, which also meant it would not malfunction in poor lighting conditions. The ultimate goal of the project was to transform everyday surfaces, like your average table or a wall, into interactive ones with the use of a nearby PC. However, the technology did more for capacitive touch than may have been intended, including introducing multiple contact points: more than two users could touch the surface simultaneously without any interference. Two prototypes were developed, showing SmartSkin as an interactive table and as a gesture-recognition pad; the second used a finer mesh than the first so it could map the coordinates of the fingers more precisely. The technology was meant to offer a real-world feel for virtual objects, essentially recreating how humans use their fingers to pick up objects and manipulate them.

2004 saw Andrew D. Wilson, an employee at Microsoft Research, develop a gesture-based imaging touch screen and 3D display. The TouchLight used a rear-projection display to transform a sheet of acrylic plastic into an interactive surface. It could sense multiple fingers and hands from more than one user, and because of its 3D capabilities it could also be used as a makeshift mirror. TouchLight was a neat technology demonstration, and it was eventually licensed out for production to Eon Reality before the technology proved too expensive to be packaged into a consumer device. It would not, however, be Microsoft's only foray into multi-touch display technology.

2004 was also the year of the commercially successful introduction of touch screens into the handheld gaming market, with the launch of the Nintendo DS. The successor to the Game Boy, the DS featured two screens, the lower of which was a resistive touch screen that could be operated with a finger or the included stylus, allowing more direct interaction with in-game elements. The commercial success of the DS, compared with the Sony PSP released around the same time, has been attributed to the touch screen experience, along with its backwards compatibility, second display and wireless connectivity. Later members of the DS and then 3DS families improved the resolution and responsiveness of the touch screen, while the launch of the Wii U GamePad reintroduced the touch screen to the home console.

Resistive touch screens dominated the handheld market until 2007, when Apple's acquisition of the FingerWorks team and its patents brought a projected capacitive (PCAP) touch panel to the original iPhone. This was the first entirely touch screen phone and the start of the interactive revolution, thanks in part to the introduction of successful multi-touch on a screen, which opened the door to far more complex interactions: pinch and spread motions for zooming, scrolling, swiping and rotating. PCAP touch screens are also more sensitive, and thus more accurate, and with advances in technology more scratch-resistant, as they can be made of tougher tempered glass.
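
The pinch-to-zoom gesture that multi-touch enabled is easy to sketch: with two tracked touch points, the zoom factor simply follows the ratio of the distance between the fingers. The Python below is an illustrative sketch of that idea, not Apple's implementation.

```python
# Sketch of the pinch/spread gesture logic that projected capacitive
# multi-touch made possible: the zoom factor follows the ratio of the
# distance between two tracked finger positions. Purely illustrative.

import math

def distance(p1: tuple[float, float], p2: tuple[float, float]) -> float:
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def zoom_factor(start: tuple, end: tuple) -> float:
    """start/end are ((x1, y1), (x2, y2)) pairs of the two touch points."""
    return distance(*end) / distance(*start)

# Fingers moving apart -> factor > 1 (zoom in); together -> < 1 (zoom out).
print(zoom_factor(((100, 100), (200, 100)), ((60, 100), (240, 100))))  # 1.8
```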

Before there was a 10-inch tablet, the name "Surface" referred to Microsoft's high-end tabletop graphical touch screen. Development began in 2001, with the first prototype built inside an actual IKEA table with a hole cut into the top, and the product was released in 2008. Researchers at Redmond envisioned an interactive work surface that colleagues could use to manipulate objects back and forth. The Microsoft Surface was essentially a computer embedded into a medium-sized table, with a large, flat display on top. The screen's image was rear-projected onto the display surface from within the table, and the system sensed where the user touched the screen through cameras mounted inside the table looking up towards the user. As fingers and hands interacted with what was on screen, the Surface's software tracked the touch points and triggered the correct actions. The Surface could recognise several touch points at a time, as well as objects with small "domino" stickers tacked onto them, and later in its development cycle it gained the ability to identify devices via RFID.

The 2010s

2010 saw the release of Apple’s iPad, creating yet another market for touch screen devices. The release fulfilled a desire expressed by Steve Jobs in a speech in 1983: “What we want to do is we want to put an incredibly great computer in a book that you can carry around with you and learn how to use in 20 minutes … and we really want to do it with a radio link in it so you don’t have to hook up to anything and you’re in communication with all of these larger databases and other computers.”

In 2012, several touch screen smartwatches were revealed, greatly increasing interest and extending the features they were capable of. Sony launched the Sony SmartWatch as a companion to its Xperia smartphones. The Pebble, funded through Kickstarter, was created by a team that had previously designed smartwatches for BlackBerry devices. The TrueSmart, also funded through Kickstarter, was announced, claiming to be the first wearable smartphone. All of these shared similar features but, most importantly, they shaped the image of what you could expect from a touch screen smartwatch: something that actually looked like a traditional watch. By 2013 it was reported that almost every tech company was developing a smartwatch or other wearable tech, such as Google Glass. Pebble was eventually bought by Fitbit for the software behind its range of touch screen fitness smartwatches, while the launch of the Apple Watch in 2015 set the standard for other smartwatches to rise to.

Microsoft rebranded the original Surface technology as PixelSense with the release of its updated 10″ Surface tablet computers, which are still in production today, albeit with much higher specs and a far more responsive, higher-resolution screen. The name "PixelSense" refers to the way the technology actually works: a touch-sensitive protective glass is placed on top of an infrared backlight. As light hits the glass, it is reflected back to integrated sensors, which convert it into an electrical signal. That signal is referred to as a "value", and those values build up a picture of what is on the display. The picture is then analysed using image-processing techniques, and the output is sent to the computer it is connected to.
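
That final image-processing step can be sketched generically: treat the grid of sensor values as a greyscale image, threshold it, and take the centre of each bright region as a contact point. The Python below is a hypothetical connected-components sketch of this kind of processing, not Microsoft's actual pipeline.

```python
# Rough sketch of the image-processing step described above: treat the grid
# of sensor "values" as a greyscale image, threshold it, and report the
# centre of each connected bright region as a contact point. Generic
# illustration only; threshold and frame data are invented.

def find_contacts(values: list[list[int]], threshold: int = 128):
    """Return (row, col) centres of connected bright regions."""
    rows, cols = len(values), len(values[0])
    seen = [[False] * cols for _ in range(rows)]
    contacts = []
    for r in range(rows):
        for c in range(cols):
            if values[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one region, collecting its pixels.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    pr, pc = stack.pop()
                    pixels.append((pr, pc))
                    for nr, nc in ((pr+1, pc), (pr-1, pc), (pr, pc+1), (pr, pc-1)):
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and values[nr][nc] >= threshold and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                contacts.append((sum(p[0] for p in pixels) / len(pixels),
                                 sum(p[1] for p in pixels) / len(pixels)))
    return contacts

# Two fingers reflecting infrared light back to the sensors:
frame = [[0,   0,   0,   0,   0],
         [0, 200, 210,   0,   0],
         [0,   0,   0,   0, 190],
         [0,   0,   0,   0,   0]]
print(find_contacts(frame))   # -> [(1.0, 1.5), (2.0, 4.0)]
```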

Today

In the latter half of the 2010s, it seemed as though any device that could have a touch interface was given one: from the touchpad on the PlayStation 4's DualShock controller, to the rise of touch screen and 2-in-1 laptops, to touch-based point-of-sale devices and event show displays. The capabilities of touch screens also continue to improve in leaps and bounds, from developments in haptics, which allowed Apple to phase out the iPhone's physical home button before removing it completely, to the introduction of successful folding phones by Samsung, Motorola and Google.

The future of touch screens looks bright.

Interested in getting a touch screen display for your business? Check out our range of interactive screens and tables here, contact us for more information or even book a free demo of our own interactive touch screen tables.
