
Quantum Dot is promising for more than just displays

Posted by wicked on Thursday, September 10, 2015

Quantum Dot technology is shaping up to be the next big step forward in LCD display technology, although we have yet to see our first smartphone implementation. The technology will likely be appearing in more devices over the coming years, but the science behind these new displays also boasts promising properties for other applications. We’re going to take a quick look at image sensors and spectrometry.

What is a Quantum Dot?

Before we begin, a quantum dot is a small nanocrystal made from various semiconducting materials, typically between 2 and 10 nanometers in diameter. Quantum dots are most widely known for their ability to emit light of different colors, an effect rooted in the optical properties of colloidal particles first studied by Michael Faraday back in 1857.

Properties such as light emission are directly linked to the size of the nanocrystal: smaller dots have a wider bandgap and emit shorter, bluer wavelengths, while larger dots emit longer, redder ones. This is useful to know when it comes to displays, cameras, and light detection technologies, as it allows a quantum dot to be manufactured at a specific size in order to work with a very specific frequency of light. This allows for the creation of the RGB colors required for displays and could also be leveraged to create color detecting pixels for image sensors.


The size of QD nanocrystals determines the color emitted by the colloid. Source.
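To get a feel for the numbers, here’s a rough sketch of the size-to-color relationship using the Brus effective-mass approximation. The CdSe material parameters below are textbook values we’ve assumed for illustration, so treat the output as an order-of-magnitude guide rather than anything a manufacturer would quote.

```python
# Rough sketch: Brus effective-mass model of quantum dot emission.
# CdSe parameters are assumed textbook values; results are approximate.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
E    = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
C    = 299792458.0       # speed of light, m/s

# Assumed bulk CdSe parameters
E_GAP = 1.74 * E   # bulk bandgap, J
M_E   = 0.13 * M0  # effective electron mass
M_H   = 0.45 * M0  # effective hole mass
EPS_R = 10.6       # relative permittivity

def emission_wavelength_nm(radius_nm: float) -> float:
    """Approximate emission wavelength for a CdSe dot of given radius."""
    r = radius_nm * 1e-9
    # Quantum confinement raises the bandgap as the dot shrinks
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) * (1/M_E + 1/M_H)
    # Electron-hole Coulomb attraction lowers it slightly
    coulomb = 1.786 * E**2 / (4 * math.pi * EPS_R * EPS0 * r)
    energy = E_GAP + confinement - coulomb
    return 1e9 * (2 * math.pi * HBAR * C) / energy  # lambda = hc/E

for d in (4, 5, 6, 8):  # dot diameters in nm
    print(f"{d} nm dot -> ~{emission_wavelength_nm(d / 2):.0f} nm emission")
```

Running this sweeps from blue (~490 nm) at 4 nm up through green, orange and red as the dot grows, which is exactly the trend the figure above shows.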

New use cases

Although displays may have grabbed much of the early attention with Quantum Dots, the technology is also highly suitable for a variety of sensor implementations. By configuring Quantum Dots as absorptive filters with specific bandpass ranges, it is possible to use them to detect specific wavelengths of light, turning the use case from a display to a light sensor.

As a Quantum Dot produces one color from a light source of higher energy, it essentially acts as a filter. In other words, blue light can activate a red QD because its photons carry more energy, but red light could not activate a blue QD. Using a series of known filters and light detecting sensors, it is possible to work out what frequency of light is hitting the sensor. A prototype image sensor using 195 different broadband QD filters has already been demonstrated.
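To see how a bank of known filters plus plain light sensors can recover a spectrum, here’s a toy reconstruction in Python. The 195 Gaussian absorption curves below are invented for the example (the real prototype used measured QD filter spectra); the recovery step is a simple regularized least-squares inversion.

```python
# Toy spectral reconstruction from a bank of broadband filters.
# Filter curves are made-up Gaussians, not real QD data.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 300)  # nm
n_filters = 195

# Assumed filter bank: broad Gaussian responses at random centers
centers = rng.uniform(400, 1000, n_filters)
widths  = rng.uniform(30, 120, n_filters)
filters = np.exp(-((wavelengths - centers[:, None]) / widths[:, None])**2)

# An "unknown" test spectrum with two emission peaks
true = (np.exp(-((wavelengths - 550) / 20)**2)
        + 0.5 * np.exp(-((wavelengths - 800) / 35)**2))

# Each sensor reads total light passed by its filter, plus noise
readings = filters @ true + rng.normal(0, 0.01, n_filters)

# Regularized least squares: minimize |F s - r|^2 + a |s|^2
a = 1e-2
recovered, *_ = np.linalg.lstsq(
    np.vstack([filters, np.sqrt(a) * np.eye(len(wavelengths))]),
    np.concatenate([readings, np.zeros(len(wavelengths))]),
    rcond=None)

print("strongest peak found near", wavelengths[np.argmax(recovered)], "nm")
```

The point of the exercise: no single filter tells you the wavelength, but the pattern of responses across many known filters pins it down.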


Don’t be fooled by the marketing terms. The LG G4’s Quantum Display does not make use of Quantum Dot technology.

Quantum Dot filters can be finely tuned across a huge range of wavelengths, from deep violet to near-infrared, which makes them well suited to spectrometry. While spectrometers are already in widespread use, the complex and large nature of their components makes them expensive. Quantum Dot based image sensors could be produced in smaller form factors and at much lower costs, enabling new products for consumer and industrial applications.


Using Quantum Dots as filters can provide information about objects. Source.

Currently, spectrometers are used in biomedical research, forensic science and chemical detection fields. An infrared spectrometer can be used to analyse the composition of a compound through molecular vibrations, while ultraviolet light can be used to detect electronic excitations. The visible spectrum in between covers what we can see with our own eyes, and spectrometers can measure these levels very accurately too.

Quantum Dot image sensors could lead to compact consumer products. These could include portable medical or self-diagnostic tools to help diagnose skin conditions, analyse urine samples or track pulse and oxygen levels.

Development could also increase access to information for more seemingly mundane tasks, such as evaluating fabric or paint samples in a store to see how well they match up with other colors in your home.

See also: QuantumFilm image sensors explained

Jie Bao, a former MIT postdoc and currently a Visiting Associate in Physics at Caltech, suggests that colloidal quantum dot materials can be applied to a sensor array using a variety of techniques, including inkjet printing or direct printing, which would be quite cost effective. Furthermore, implementation in consumer electronics may not even require the 195 filters already proposed for such a sensor. A reasonable system may be able to get away with a dozen or so dots spread throughout the spectrum to provide enough information and accuracy for most consumer applications.

If such a Quantum Dot image sensor can be manufactured at a reasonable cost, in the future we may see high-resolution image sensors powering a range of spectrometers found in industrial, scientific and consumer grade products. The technology is suitable for much more than just displays, and TVs are just the beginning.

Sony developing 1,000 fps image sensor for intelligent computer sensing

Posted by wicked on Monday, September 7, 2015


Sony is the market leader in the image sensor business and the company is looking to maintain a significant lead over its competitors with new technologies. One of the latest is Sony’s research into an affordable 1,000 fps capable image sensor, which is being developed in conjunction with Nissan Motor Co. and Masatoshi Ishikawa, a Tokyo University professor.

The new 1,000 fps sensor has been developed by stacking the circuit and sensor parts for faster speeds and a high resolution, rather than placing the components side-by-side. Sony has been able to reach speeds over 900 fps with some prototypes, while your typical modern smartphone camera sensor might be capable of slow motion video capture at 120 frames per second.

However, this isn’t really a fair comparison, as these fast image sensors aren’t intended for capturing the perfect picture or home video; rather, they open up development of new technologies that make use of intelligent computer sensing. The work being conducted with Nissan could enable driverless vehicles that can quickly detect and avoid hazards, or be put to use to develop faster industrial manufacturing methods.

“The images for sensing require a different kind of chip, and the challenge is converting technologies that make beautiful photos to new uses.” – Shinichi Yoshimura, Sony

High speed image sensors can also play an important role in lowering the cost of advanced gesture recognition systems. Such technologies at an affordable price point could find use in a wide range of consumer and industrial applications, including wearable gadgets and other mobile products.

“High-speed image sensors are a niche industry, but Sony has the power to take it mainstream … And that may be just two years away.” – Masatoshi Ishikawa, University of Tokyo

1,000 fps image sensors already exist but are hugely expensive and relatively large, which prohibits their widespread use. These types of sensors cost anywhere from $1,000 to $100,000 and come from companies including Sony and Vision Research Inc. By adapting its existing mobile image sensor technology, Sony should be able to produce competitive chips at a fraction of previous sizes and costs.

Sony anticipates that image sensor sales could climb as much as 62 percent to 1.5 trillion yen in three years. However, the company also expects that its rivals will catch up with its mobile image sensor technology, so finding new markets will be key in order to stay ahead. Sony is apparently investing €1.5B ($1.7B) in its image sensor operations in FY16, five times the amount that it invested in FY15.

This new sensor technology may help Sony diversify its sensors into new markets and could result in some exciting new products for us consumers. Definitely something to keep an eye on.

QuantumFilm image sensors explained

Posted by wicked on Thursday, August 20, 2015

Smartphone cameras have come a long way recently, with sensor and lens setups in some of this year’s flagships offering up some seriously good looking results; just look at the Galaxy S6 or the LG G4, for example. However, even the best smartphone cameras still suffer from limited versatility, often have poor low light performance, and show heavier noise and crosstalk than the higher end sensors found in DSLR cameras.

Furthermore, the resolution race has seen increasingly high-resolution cameras in smartphones, but our testing and experience has shown us that the cameras with the most pixels aren’t necessarily producing the best results. Then again, HTC’s attempt to buck this trend with its Ultrapixel technology didn’t produce superior results either. The fact of the matter is that sensors, and therefore pixel sizes, in smartphones are limited by their compact size.

See also: LG G4 vs Samsung Galaxy S6 / S6 Edge – Camera Shootout

InVisage, a fabless semiconductor company, is planning to bring its unique QuantumFilm technology to market, which might provide a big leap forward in image quality for small form factor mobile devices.

The problem

The crux of the issue comes down to the compromises made between module size and light capture. For a little background, modern CMOS image sensors are built up of lots of sensor/pixel cells, each configured with a filter to detect how much red, green, or blue light is in the scene and in which locations. But these sensors aren’t perfect: there is a certain amount of reflection and loss as light enters a sensor, and there can also be crosstalk between adjacent cells and electronic interference, all of which manifests as noise and color artifacts.
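For the curious, here’s a minimal sketch of how those per-cell red, green and blue samples become full-color pixels. This is plain bilinear interpolation over an assumed RGGB mosaic; real image signal processors use much smarter, edge-aware methods.

```python
# Minimal bilinear demosaic of an RGGB Bayer mosaic (illustrative only).
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw: np.ndarray) -> np.ndarray:
    """raw: 2D array of sensor values under an RGGB color filter array.
    Returns an (h, w, 3) RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1  # R on even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1  # B on odd rows/cols
    g_mask = 1 - r_mask - b_mask                       # G everywhere else

    # Bilinear interpolation kernels for sparse samples
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g  = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])

    def interp(mask, kernel):
        # Normalized convolution: average the known samples around each pixel
        return convolve(raw * mask, kernel) / np.maximum(convolve(mask, kernel), 1e-6)

    return np.dstack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)])
```

Every output pixel borrows color information from its neighbors, which is exactly why crosstalk and noise in one cell can smear into the pixels around it.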


Low light pictures often expose the weaknesses of a sensor.

These problems are more pronounced in compact smartphone sensors, as the cells are smaller and packed closer together. Further increasing the resolution of a sensor compounds these problems, leading to more noise and worse performance in low light conditions.
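A crude shot-noise model illustrates the point. The photon flux, quantum efficiency and read noise figures below are assumptions for illustration, not measurements from any particular sensor.

```python
# Back-of-the-envelope shot-noise model: why smaller pixels struggle
# in low light. All numbers here are assumed for illustration.
import math

def snr_db(pixel_pitch_um: float, photons_per_um2: float,
           read_noise_e: float = 3.0) -> float:
    """Single-pixel SNR, assuming ~50% quantum efficiency."""
    signal = 0.5 * photons_per_um2 * pixel_pitch_um**2  # collected electrons
    noise = math.sqrt(signal + read_noise_e**2)         # shot + read noise
    return 20 * math.log10(signal / noise)

for pitch in (1.0, 1.4, 2.0):  # typical phone and compact-camera pitches
    print(f"{pitch} um pixel, dim scene: {snr_db(pitch, 50):.1f} dB")
```

Doubling the pixel pitch quadruples the light collected, and because shot noise only grows with the square root of the signal, the bigger pixel comes out several dB ahead in dim scenes.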

The image sensor industry has come up with a number of innovations to help combat these problems. Moving over from frontside to backside illumination helped reduce loss as the light reached the base of the cell, while Samsung’s Isocell aimed to better insulate nearby cells from each other, resulting in less crosstalk. These are fine solutions, but they don’t completely eliminate the aforementioned problems.

QuantumFilm’s solution

QuantumFilm image sensor section

InVisage’s QuantumFilm technology aims to address these problems by tweaking traditional sensor designs to make use of its own light sensing layer, which promises to capture more light and avoid crosstalk. Much of the design remains the same as today’s CMOS sensors; it is the QuantumFilm layer itself that is of particular interest.

Rather than using silicon photodiodes, InVisage’s sensors use their own metal-chalcogenide quantum-dot film to capture much more light near the surface of the sensor. This film is built from quantum dots (small nanocrystals with quantum mechanical properties) arranged in a colloid, a solution made up of evenly distributed small particles.

CMOS BSI vs QuantumFilm

This layer is sandwiched between the usual filter layer and the electrode circuitry. When a certain color of light reacts with the QuantumFilm layer, the circuit can detect the region in which this reaction occurred to determine the pixel’s color. This way, the camera’s resolution does not affect the amount of light captured in the way it does in traditional CMOS sensors, and there’s apparently less crosstalk than with solutions that require larger photosensitive cells. In other words, the resolution of the filter layer and the density of the detecting circuitry determine the resolution, while the film layer remains unchanged.

The video below offers a pretty comprehensive explanation of what the company wants to achieve, without the techno-babble.

This whole idea seems rather well suited to smartphones, where compact hardware is essential. QuantumFilm has a few benefits in this regard, as it can be produced at very thin sizes, cutting up to 0.8mm off the very smallest CMOS sensors, which is a small but valuable space saving inside a smartphone.

QuantumFilm absorption strength

Furthermore, QuantumFilm boasts a light absorption capacity up to eight times greater than some silicon CMOS sensors, allowing for greater dynamic range, better low light shots and less noise. It can also be used for infrared light detection, opening the door for new and interesting compact product ideas.

How soon?

Like many other up and coming pieces of technology, the big problem with QuantumFilm is that it remains untested in real world consumer products. There has been a lot of talk for a number of years, but nothing for us to really sink our teeth into.

As a small company, InVisage is currently only producing a small number of wafers, but is looking to ramp up production in the second half of this year. TSMC will be helping InVisage further increase production with additional capacity next year.

We are still probably in for quite a wait until the first smartphones appear sporting the technology, but QuantumFilm is certainly something to keep an eye on.

Google and MIT researchers demo their photo reflection removal algorithm

Posted by wicked on Wednesday, August 5, 2015


I’m sure we have all witnessed those pesky reflections while trying to grab a photograph through a window, but those days may soon be behind us, thanks to research conducted by Google and MIT. The group presented a paper at Siggraph 2015 and has published a video demonstrating its algorithm for removing reflections from your pictures.

The software isn’t just good for reflections though; it can also be used to analyse and remove other obstacles from your pictures, such as raindrops on glass and even a chain-link fence that partially obstructs your view. It’s not 100 percent perfect, but it seems to do a pretty good job of removing most of these annoyances in a wide range of scenarios, including tough low-light scenes.

MIT and Google reflection removal 2

The developers state that the algorithm works from a short video clip that could, for example, be captured on your phone. From this, the algorithm sorts out the depth of the scene using differences in edge motion across the successive frames, and can figure out which obstructions sit in the foreground. A somewhat similar idea is used for techniques like post processing depth of field adjustments and 3D parallax images, which rely on multiple points of view.

From here, the software can fill in the obstructed space with information from other frames, resulting in a clearer final picture. One creepy “side effect” of the technology is that it can also quite accurately recreate a clear image of whatever is contained within a reflection or occlusion.
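To be clear, the sketch below is not Google and MIT’s algorithm, which separates layers using per-edge motion analysis. It’s a toy version of the same multi-frame principle: align the frames on the background, and anything that moves differently (the obstruction) gets rejected by a per-pixel median.

```python
# Toy multi-frame obstruction removal: align frames on the background,
# then take a temporal median. Not the paper's actual algorithm.
import cv2
import numpy as np

def remove_obstruction(frames):
    """frames: list of BGR images from a short handheld clip."""
    ref = frames[len(frames) // 2]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    aligned = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        # Estimate a global affine motion model for the background
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_AFFINE)
        aligned.append(cv2.warpAffine(
            f, warp, (ref.shape[1], ref.shape[0]),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
    # The obstruction moves relative to the aligned background, so the
    # per-pixel median keeps the background and discards the obstruction.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)
```

The real system goes much further, reconstructing both the background and the reflection layer rather than simply discarding one of them.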

The video below has a really detailed explanation about how this is accomplished and a few more examples, which is well worth a watch if you’re keen on details.

This type of technique has been tried before, but previous results have been rather mixed; Google and MIT’s implementation seems the best so far. Unfortunately, we don’t know if or when this type of technology will become available for smartphone cameras. Here’s hoping that someone picks up the idea and brings it to consumers.

A look at PowerVR’s Ray Tracing GR6500 GPU

Posted by wicked on Friday, July 24, 2015


Last week, Imagination Technologies announced that its GR6500 ray tracing GPU taped out on a test chip, a major milestone on its way into a mobile product. The GR6500 is unique, as this is Imagination’s first ray tracing GPU based on its latest PowerVR Wizard architecture. A series of articles released this week explain exactly what’s behind this technology, so let’s delve into the key points.

Ray tracing, for those unfamiliar with the term, is a method of modelling lighting in a virtual 3D space, which aims to closely mimic the actual physics of light. The method is in the name: the technique “traces” the path of light rays through the 3D space to simulate the effects of their encounters with virtual objects, and collects this data for the pixels displayed on screen. It can produce highly realistic lighting, shadow, reflection and refraction effects, and is sometimes used in 3D animated movies.

As you can probably imagine, there can be a ton of different light sources to calculate using this method, and figuring them all out is extremely expensive in both computation and memory, so game developers usually opt for cheaper approximations like rasterized rendering. However, you can severely cut down on ray tracing processing time by using dedicated hardware, which is what Imagination Technologies has done with its PowerVR Wizard GPU.

PowerVR GR6500

The GR6500 features a dedicated Ray Tracing Unit (RTU), which performs the ray calculations and keeps track of all the associated data. As for what the RTU actually does: it first creates a database representation of the 3D space, then tracks where rays intersect with the scene geometry.

“We approached the problem differently. While others in the industry were focused on solving ray tracing using GPU compute, we came up with a new approach leveraging on our prior expertise in rasterized graphics”– Luke Peterson, Imagination’s director of research for PowerVR Ray Tracing

When running the shader for each pixel, the RTU searches the database to find the closest intersecting triangle in order to figure out the color of the pixel. This process can also cast additional rays to simulate reflective properties, which in turn will affect the color of other pixels. Keeping track of these secondary rays is hugely important too, and it’s all handled in the RTU to improve performance.
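For a sense of what that closest-hit query involves, here’s the classic Moller-Trumbore ray-triangle test with a naive linear scan standing in for the RTU’s hardware search. Real implementations traverse an acceleration structure rather than testing every triangle.

```python
# Moller-Trumbore ray-triangle intersection plus a naive closest-hit
# scan, standing in for the RTU's scene-database search.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return distance t along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray is parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None                      # hit point outside the triangle
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None        # ignore hits behind the origin

def closest_hit(origin, direction, triangles):
    """Linear scan; dedicated hardware uses an acceleration structure."""
    best = (None, float("inf"))
    for tri in triangles:
        t = ray_triangle(origin, direction, *tri)
        if t is not None and t < best[1]:
            best = (tri, t)
    return best
```

Multiply that little query by millions of primary and secondary rays per frame and it becomes clear why doing it in fixed-function hardware pays off.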

Ok, so what does this actually mean in terms of performance, graphics and games?

Ultimately, getting closer to photorealistic graphics is the aim of the game, but this can take a number of forms, from accurate reflections to lighting and shadows. Compared with GPU compute or software based ray tracing approaches, the use of dedicated hardware makes the GR6500 up to 100 times more efficient, which is why traditional GPUs have had to depend on different approaches. This huge reduction in processing costs opens up new avenues for optimized ray tracing based graphics effects in mobile titles.

Imagination Technologies gives an example comparison of ray traced shadows vs traditional cascaded shadow maps. You can read all about the technical details in the official blog post, but the short of it is that the ray tracing and penumbra simulation method produces much more accurate shadows than the rougher approximation technique of cascaded shadow maps. This is essentially because ray tracing simulates light paths accurately regardless of distance, while shadow mapping is limited to a more finite resolution and distance scaling to maintain performance.



Furthermore, using the hardware based technique reduces memory traffic compared with cascaded shadows. In one test, a single scene used up 233MB of memory for cascaded shadows compared with 164MB for ray tracing. Subtract the “G Buffer” setup cost of the scene and ray tracing can result in a 50 percent reduction in memory traffic. Given that memory bandwidth is a limiting factor in mobile GPUs, especially when compared with desktop GPUs, this reduction can give quite a nice boost to performance as well.

In terms of frame time, Imagination Technologies’ example shows an average reduction of close to 50 percent. So not only do ray traced shadows look better, but they can also be implemented with a higher frame rate than cascaded shadows, thanks to the use of dedicated hardware.


There is one point worth noting though: it’s up to developers to implement these types of effects in their games. With only a small selection of compatible hardware heading to the market any time soon, we probably won’t see the benefits for a while yet.

However, someone has to take the first step, and Imagination Technologies’ GR6500 GPU may be the starting point for some much more visually impressive mobile graphics a little way down the line.

WiFi Aware enables instant local device communication

Posted by wicked on Wednesday, July 15, 2015


WiFi may empower us to do clever things but the technology itself isn’t that smart. It doesn’t communicate anything meaningful until a full connection has been established, which means that it can’t tell us useful pieces of information about the service that we want to connect to. However, the newly unveiled WiFi Aware specification aims to change this.

WiFi Aware enables certified products to discover and communicate with other nearby devices without relying on an internet connection, sort of like Bluetooth Low Energy or Qualcomm’s LTE Direct. Devices will continually broadcast small packets of information, which could allow applications to push notifications to other devices or provide information to a user about a nearby service, person or business, all before making a regular WiFi connection.

WiFi Aware is part of the growing trend towards smaller hubs connected up to the larger Internet of Things. This could power all sorts of simple conveniences, such as turning on your lights when you’re in range of your home WiFi or finding a nearby shop that stocks your favorite products.

A key part of the idea is contextual data. We don’t want to be bombarded with all of the information from various nearby networks. Instead, users will have control over the type of data they are alerted to and the data they push to other devices, a filter if you will. A WiFi Aware device knows about everything in close proximity, but only connects to relevant sources of information.
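As a toy model of that filtering idea (not the actual WiFi Aware frame format or any real API), discovery boils down to matching broadcast service beacons against the services a user has opted into:

```python
# Toy model of Aware-style discovery: devices broadcast small service
# beacons and a subscriber only surfaces the ones the user opted into.
from dataclasses import dataclass

@dataclass
class Beacon:
    service: str   # short service name, e.g. "coffee-shop-menu"
    info: bytes    # small app-defined payload

def discover(beacons, subscriptions):
    """Return only the beacons matching the user's opt-in filter."""
    return [b for b in beacons if b.service in subscriptions]

nearby = [
    Beacon("coffee-shop-menu", b"espresso $2"),
    Beacon("smart-lights", b"hall lamp online"),
    Beacon("ad-network", b"50% off!"),
]

# The user opted in to lights and coffee, so the ad beacon is ignored.
print(discover(nearby, {"coffee-shop-menu", "smart-lights"}))
```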

Privacy and the impact on battery life are sensible concerns, but Edgar Figueroa, President of the WiFi Alliance, says that WiFi Aware is very power efficient and consumes less energy than traditional WiFi. As for privacy, apps that use the service will have opt in/out settings, and the lack of an instant Internet connection offers some extra protection.

The first wave of WiFi Aware gadgets and applications is not here just yet, but Broadcom, Intel, Marvell and Realtek already have certified chips for future gadgets. Social networks could roll out applications with WiFi Aware before the end of the year.

Samsung is working on an 11K mobile display

There are enough people out there that think Quad HD on a smartphone is ridiculous and completely unnecessary, and we’re guessing they won’t be too happy to hear that Samsung is already thinking beyond Quad HD. Way beyond.

According to Korea’s Electronic Times, Samsung has challenged itself to build a display of unprecedented resolution: 11K, for a pixel density of 2,250 pixels per inch. Samsung has not offered exact specifications, but throwing the numbers into a DPI calculator shows that achieving 2,250 ppi on a mobile (5.5-inch) display would require a resolution of approximately 11,000 by 6,000 pixels.

That’s absolutely amazing, given that today’s best smartphone displays offer Quad HD (2560×1440, 530 ppi on a 5.5-inch screen), while the next big step is 4K (3840×2160, 800 ppi on a 5.5-inch).
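The arithmetic is easy to check yourself: pixel density is just the diagonal pixel count divided by the diagonal size in inches.

```python
# Pixel density check for a 5.5-inch panel: ppi = diagonal px / diagonal in.
import math

def ppi(w_px: int, h_px: int, diagonal_in: float) -> float:
    return math.hypot(w_px, h_px) / diagonal_in

for name, (w, h) in {"Quad HD": (2560, 1440),
                     "4K UHD": (3840, 2160),
                     "11K (approx.)": (11000, 6000)}.items():
    print(f"{name}: {ppi(w, h, 5.5):.0f} ppi")
```

That prints roughly 534, 801 and 2,280 ppi respectively, in line with the figures quoted above.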

The project was announced by Samsung Display executive Chu Hye Yong during a workshop in Korea. Samsung is teaming up with 13 Korean and foreign companies for this moonshot and is enlisting the help of the Korean government.

“We are hoping that we are able to show such technologies at Pyeongchang Olympics if there is a progress in developing technologies. Although some might think that 11K as ‘over specification’ that consumers do not need, this can work as a basis for Korean display industry take another leap if related materials and parts improve through this,” said Chu.

Don’t expect the project to bear fruit anytime soon. The goal is to show a working prototype of the new display by the 2018 Pyeongchang Winter Olympics.


Many believe that even the Galaxy S6 Edge’s Quad HD screen is overkill…

Okay, but why?

Now, for the key question: why would Samsung want to develop such an extremely dense display? It’s for 3D. When you have so many pixels to work with, you can create 3D effects without the need for special glasses or other cumbersome techniques.

But it’s the rise of VR that could really push display manufacturers towards new limits of pixel density. When you strap a display a few inches from your eyes, you can’t have too many pixels per inch. The Oculus Rift, due to launch next year, offers a resolution of 2160 x 1200, and, even if you can notice the pixelation, the experience can be amazing. Now imagine what you can do with ten times as many pixels.

If there’s any entity in the world that is able to create a display that is three times as dense as 4K, it’s Samsung. Of course, processors and batteries will need to keep up. For now, Full HD remains the standard spec, Quad HD appears in some of the nicer phones out there, and 4K displays are probably in the labs, waiting for their place in the spotlight. For more info on 4K, the manufacturers that are working on it, and its effects on the industry, check out our comprehensive look at the present and future of 4K technology.

Korean researchers can prevent flash memory decay and lengthen battery life

Posted by wicked on Friday, July 10, 2015


Wouldn’t it be great if our technology didn’t suffer from eventual slowdowns with age? Well, Hanyang University researchers have announced a new technology that could slow down the rate of decay, boost performance and increase smartphone battery life. The technology is named WALDIO, or Write Ahead Logging Direct IO.

The idea all revolves around the internal NAND flash memory used for storage. Flash memory has a limited number of write/erase cycles and ages with use. Constant writing eventually results in dead sectors, which slows down the reading and writing processes used by all applications. Furthermore, reading and writing to flash is an energy consuming process, so optimizing these types of tasks could increase smartphone battery life.

“This tech will make it possible to use low-priced flash memory for a long time, like expensive flash memory.”  – Professor Won You-jip

The problem, as the researchers see it, is related to the SQLite database management system and a number of unnecessary writes to storage within the Android IO stack, which degrades flash memory faster than necessary. The researchers want to dispense with the expensive file system journaling, without compromising the file integrity.

Without the jargon, WALDIO simply records smaller amounts of data to flash memory in order to preserve its lifespan.

WALDIO aims to optimize SQLite IO using block pre-allocation with explicit journaling, header embedding and group synchronisation. You can read all about it in greater depth in the published PDF, but it’s certainly not light reading.
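To see why there is so much fat to trim, here’s a deliberately crude model of write amplification in a journaled commit. Every number below is an assumption for the sake of illustration, not a measurement from the WALDIO paper.

```python
# Crude write-amplification model: a tiny SQLite commit can touch far
# more flash than the payload itself. All figures are assumptions.
PAGE = 4096  # bytes, a typical flash/filesystem page size

def journaled_commit(payload_bytes: int) -> int:
    """Rollback-journal style commit: journal + db write + fs metadata."""
    data_pages = -(-payload_bytes // PAGE)  # ceiling division
    journal = data_pages + 1                # saved old pages + journal header
    metadata = 2                            # inode / filesystem journal updates
    return (data_pages + journal + metadata) * PAGE

def log_structured_commit(payload_bytes: int) -> int:
    """WALDIO-like commit: embed the header, skip filesystem journaling."""
    return -(-(payload_bytes + 64) // PAGE) * PAGE  # payload + embedded header

for n in (100, 1000):
    print(f"{n} B insert: {journaled_commit(n)} B vs {log_structured_commit(n)} B")
```

Under these toy assumptions a 100-byte insert costs five pages the traditional way versus one page with combined logging, which is in the same ballpark as the roughly six-fold IO reduction the researchers report.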

WALDIO performance

Testing using a Samsung Galaxy S5 revealed a significant reduction in total IO volume across a number of operations, with volume reduced to around 1/6 of the original size. The test also demonstrated up to 4.6 times faster command performance over the default methods, freeing up time for other memory operations. In terms of what this means for us users, our smartphones could operate up to 20 times faster at certain tasks and battery life could be extended by 39 percent or more.

Unfortunately, we don’t yet know if WALDIO could be implemented on existing devices or even if the technology will ever make its way into a commercially available device. The team will be presenting their study at the Usenix Annual Technical Conference in Santa Clara today, and will hopefully grab some attention.

IBM announces the world’s first working 7nm chip

Posted by wicked on Friday, July 10, 2015

IBM, in conjunction with GlobalFoundries, Samsung and SUNY, has unveiled the world’s first successful production of a 7nm FinFET chip with fully working transistors. The achievement comes as part of a $3 billion, five year research program spearheaded by IBM, which aims to push the limits of chip technology.

Today’s leading mobile processors are built on 14nm and 20nm manufacturing techniques. This 7nm breakthrough will eventually lead to smaller, faster and more energy efficient processors. However, before we go any further, it’s important to note that we are still years away from any potentially viable mass manufacturing techniques at 7nm.

Just yesterday we were talking about the ongoing race to 10nm, but to reach even smaller chip sizes we’re going to need new manufacturing techniques and new materials; plain old silicon just won’t cut it here. This is where IBM’s research comes in.

For a little background, one of the difficulties associated with smaller transistors (the electronic switches that form the basis of processors) is that the number of electrons able to squeeze through the transistor (aka the current) is also reduced, which increases the chance of errors. To combat this issue, IBM mixed some germanium into the channel, producing a silicon-germanium (SiGe) alloy with higher electron mobility, thereby improving the current flow. SiGe also helps to keep power consumption low and transistor switching speeds high.


SUNY College of Nanoscale Science and Engineering’s Michael Liehr, left, and IBM’s Bala Haranand look at a wafer of 7nm chips

The other half of successfully producing such small chips is actually developing manufacturing tools detailed and accurate enough to etch out your processor design on such a small scale. IBM made its chip using EUV lithography, which uses a wavelength of just 13.5nm to etch out chip features. This is substantially smaller than the 193nm wavelength of state of the art argon fluoride lasers used at 14nm.
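The Rayleigh criterion gives a feel for why the shorter wavelength matters: the minimum printable feature scales as k1 · λ / NA. The k1 and NA values below are typical published figures, assumed here for illustration.

```python
# Rayleigh resolution estimate, CD = k1 * wavelength / NA.
# k1 and NA values are typical published figures, assumed for illustration.
def min_feature_nm(wavelength_nm: float, na: float, k1: float = 0.35) -> float:
    return k1 * wavelength_nm / na

print(f"ArF immersion (193 nm, NA 1.35): ~{min_feature_nm(193, 1.35):.0f} nm")
print(f"EUV           (13.5 nm, NA 0.33): ~{min_feature_nm(13.5, 0.33):.0f} nm")
```

Single-exposure ArF immersion bottoms out around 50 nm features (hence the multi-patterning tricks used at 14nm), while EUV’s 13.5 nm light can resolve features in the low teens of nanometers in one pass.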

However, EUV is still expensive and difficult to use, making its suitability for time constrained mass production questionable. The tiniest errors at this size can completely undermine production, so expensive stabilizing isolation equipment and buildings are required to protect the manufacturing tools from vibrations. Some observers are concerned about the diminishing savings associated with ever smaller processors, as difficult and more expensive manufacturing techniques eat into the cost benefits of being able to squeeze more chips onto the same silicon area.

It is still too early to say when 7nm mass production capabilities will be ready, but firms have their sights set on sometime around 2017/2018.

Researchers develop the first skin-like flexible display

Posted by wicked on Wednesday, July 1, 2015

Dr Debashis Chanda, University of Central Florida

Flexible display technology has been gradually turning up in high-end gadgets, but research into even more flexible solutions is showing no signs of slowing down. A research team from the University of Central Florida, led by Professor Debashis Chanda, has developed the first-ever skin-like colour display, which is thin and flexible enough to be used alongside fabrics.

The research team’s technique could open the door to thin, flexible, full-color displays that could be built into plastics and synthetic fabrics. The display layer is only a few micrometres (µm) thick, considerably thinner than a human hair, which is typically around 0.1mm.

“This is a cheap way of making displays on a flexible substrate with full-color generation … That’s a unique combination.” – Dr. Chanda

The research team says that you could do all sorts of fun things with it, such as create a shirt that can change its colors and patterns on demand. If you’ve played any of the recent Metal Gear Solid games, it’s not far off that crazy camouflage suit.

Inspiration apparently came from nature; specifically, animals such as chameleons, octopuses and squids, which can change their colors using just their skin.

How does it work?

Traditional display components found in TVs or smartphones require a dedicated light source, be it an OLED panel or a backlight. However, this color changing flexible display does not require a light source of its own; instead, it reflects ambient light back from its surface.


Dr. Chanda used a National Geographic photograph to demonstrate the technology.

This technique is accomplished using a thin liquid crystal layer placed over a metallic egg-carton shaped nanostructure. This shape absorbs some light wavelengths and reflects others, and the reflected light can be adjusted by passing different voltages through the liquid crystal layer, not unlike a regular LCD display.

Previous attempts at this type of display have resulted in limited colors and thicker designs. It’s the nanostructured metallic surface, as well as the interaction between liquid crystal molecules and plasmon waves, that enables the new design to offer such a wide range of colors with just a tiny footprint.

Potential products

As well as color changing clothing ranges, this type of technology could also improve on existing electronic products. TVs, laptops and smartphones could all be built with even thinner displays or new form factors. It also sounds like there’s potential for energy and cost savings too, which could help bring flexible display tech to the masses.

Not only that, but other types of wearable electronics could be born, such as fabric based wrist bands, and there’s potential for other types of displays that we probably haven’t seen yet. Of course, we’re going to need complementary developments in flexible and discreet processor, circuitry and battery technologies before any of these futuristic sounding products can end up on the market. But we’re getting there, one step at a time.