Slatedroid info

Everything about android tablet pc [slatedroid]

Watch out OLED, here comes PCOLED!

Posted by wicked, Thursday, November 26, 2015


RGB OLED may form the basis of a number of high end TV and smartphone displays, including new flexible designs, but the technology could one day be replaced by an improved Plasmon-Coupled Organic Light Emitting Diode (PCOLED) architecture. Taiwan-based ITRI has announced development of its PCOLED design, which could boost the lifetime of displays by up to 27 times.

PCOLED replaces the traditional red, green and blue phosphorescent color layers used to produce white light with a red, green and green plasmon-coupling phosphorescent design, complete with a double metal structure. This is still able to produce white light very similar to that of the traditional RGB design, but has major benefits when it comes to display lifetime.


The weak link in a display’s lifetime is usually the blue fluorescent layer, which has a substantially shorter lifetime than the other colored layers. When display quality begins to degrade with age, it’s usually the blue pixels that are to blame. By replacing the blue fluorescent layer with a green phosphorescent layer, white light can still be produced, but with the green layer now dictating the minimum material lifetime, which could be up to 27 times longer than before.

Dr. Ming-Shan Jeng offers an explanation as to how this works:

“In the green phosphorescent material, there is actually a blue emission band in addition to the green one, but it is very weak. With the double metal structure, we actually generate more plasmons and shift the probability for emission from the green to the blue band.”

Apparently, the discovery was made by accident while conducting another experiment. Since then, ITRI has developed two display samples, at 10×10 and 15×15 cm, and is currently looking for commercial partners as it continues to develop the technology. ITRI is already working with WiseChip to begin production of a PCOLED structure on a passive matrix OLED line. That said, we are still at least one to two years away from the announcement of any PCOLED based products.

Revolutionary material could cut smartphone display energy consumption

Posted by wicked, Tuesday, November 24, 2015


Smartphone displays are one of the biggest drains on our precious battery life, but a new breakthrough invention being developed by Bodle Technologies could dramatically cut the amount of juice required by future displays in a wide range of gadgets. Rather than an LCD or OLED based design, this new reflective display makes use of phase-change materials to reproduce vivid colors while consuming very little power.

The idea is based on the technology used in rewritable DVDs and works by using electronic pulses to change the color of the display’s materials. The material itself doesn’t consume any power, only requiring a brief charge to change its state and color, meaning that the energy needed to power a display in a smartphone or wearable could be cut quite substantially.
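To see why bistability matters for battery life, here is a back-of-the-envelope comparison. All of the numbers below (power draw, per-update energy, update count) are illustrative assumptions for the sake of the arithmetic, not Bodle specifications:

```python
# Assumed figures: an emissive display drawing a constant ~100 mW while on,
# versus a bistable display that only spends energy when its state changes.
EMISSIVE_POWER_W = 0.100   # assumed constant draw while the screen is lit
UPDATE_ENERGY_J = 0.001    # assumed energy per full-screen rewrite
UPDATES_PER_DAY = 1_000    # assumed number of notifications/refreshes

def daily_energy_emissive(hours_on=4):
    """Energy in joules for an always-emitting display."""
    return EMISSIVE_POWER_W * hours_on * 3600

def daily_energy_bistable():
    """Energy in joules when only state changes cost anything."""
    return UPDATE_ENERGY_J * UPDATES_PER_DAY

print(daily_energy_emissive())  # ~1440 J per day
print(daily_energy_bistable())  # ~1 J per day
```

Even with generous assumptions for the bistable display, the gap is three orders of magnitude, which is the intuition behind the once-a-week charging claim below.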

“You have to charge smartwatches every night, which is slowing adoption. But if you had a smartwatch or smart glass that didn’t need much power, you could recharge it just once a week.” – Dr Peiman Hosseini, founder of Bodle Technologies

Bodle Technologies is said to be in talks with a number of the world’s largest consumer electronics corporations, although none have been named for legal reasons. The company has also secured a “significant” amount of finance from the Oxford Sciences Innovation fund to boost further development.

“This technology is capable of providing vivid colour displays which appear similar to paper, yet with very high resolution. It is also capable of rendering extremely high-resolution videos that can be seen in bright sunlight.” – Dr Hosseini

As well as smartphones and wearables, this innovation is also thought to be useful for the emerging smart glass market, which is estimated to be worth approximately $2 billion by the year 2017. Such technology could be used to block infrared waves to help keep buildings cooler without the need for air-conditioning.

We are still quite a way off from the technology hitting any consumer level products, but it’s another promising innovation that could open up an entire new market for gadgets and beyond.

Researchers bring us one step closer to the ‘ultimate battery’

Posted by wicked, Friday, November 13, 2015



A research breakthrough in lithium-oxygen battery development could now make the ‘ultimate battery’ a possibility, as a number of barriers to development appear to have been overcome.

Lithium-oxygen (Li-air) has been hailed as the base for the ‘ultimate battery’ due to its energy density benefits over current lithium-ion cells. Lithium-oxygen can offer ten times the theoretical energy density of current batteries, which would enable smaller, cheaper and longer lasting cells for gadgets or battery powered vehicles. The huge potential benefits with Li-air had been thought to be out of reach, but researchers appear to be getting closer to a viable solution.

Battery capacity expectations

Source: IDTechEx

Researchers from the University of Cambridge have demonstrated a new lithium-oxygen cell that is more than 90 percent efficient, more stable than previous attempts, and can be recharged more than 2,000 times. However, as with all of these emerging battery technologies, there are a number of obstacles to overcome before we see anything close to a viable product.

As we are probably all too aware, battery technology has failed to keep pace with processors and other energy sapping components found in our gadgets, resulting in decreased use time, so an alternative would certainly be welcome. Post-lithium batteries are also seen as important in the growing automotive and green energy storage industries, where large and therefore more expensive lithium-ion batteries are seeing increased demand. If lithium demand from these sectors grows as expected, a strain on supply could make existing battery technology more expensive, leading to a drive for alternatives.

Lithium-air batteries have become popular in research fields over the past decade, catching up with the likes of sodium and lithium-sulphur. Other promising areas of research include silicon anode technologies, lithium capacitors and solid-state batteries, but there are still compromises and technical issues left to overcome.


Ten times the battery capacity would be a major boost for smartphones, but would also be beneficial for the electric vehicle and green energy storage industries.

The difference between a lithium-oxygen and a lithium-ion battery lies in the battery’s electrode. Rather than graphite, the researchers have built their electrode from graphene, a material you have probably heard plenty about before. The graphene is highly porous and is combined with lithium iodide to lower the voltage gap between charge and discharge to just 0.2 volts, making the battery more efficient than previous implementations, which had a gap anywhere between 0.5 and 1 volt.
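The efficiency claim follows directly from that voltage gap: round-trip voltaic efficiency is roughly the discharge voltage divided by the charge voltage. Taking an assumed discharge plateau of about 2.8 V (an illustrative figure, not one from the paper):

```python
# Round-trip voltaic efficiency ~ V_discharge / V_charge,
# where V_charge = V_discharge + gap. The 2.8 V plateau is an assumption.
def voltaic_efficiency(discharge_v, gap_v):
    return discharge_v / (discharge_v + gap_v)

print(round(voltaic_efficiency(2.8, 0.2), 3))  # ~0.933 with a 0.2 V gap
print(round(voltaic_efficiency(2.8, 1.0), 3))  # ~0.737 with a 1.0 V gap
```

A 0.2 V gap lands comfortably above the 90 percent figure, while the older 0.5 to 1 V gaps could not.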

“While there are still plenty of fundamental studies that remain to be done, to iron out some of the mechanistic details, the current results are extremely exciting – we are still very much at the development stage, but we’ve shown that there are solutions to some of the tough problems associated with this technology,” – Professor Clare Grey of Cambridge’s Department of Chemistry

However, like some previous enhanced capacity battery research that we have seen, there’s a problem with lithium metal fibres, known as dendrites, which can form on the metal electrode, eventually leading to a short-circuit within the battery and possible explosions! The researchers are also yet to find a way to protect the metal electrode from the carbon dioxide, nitrogen and moisture in the air around the battery.

Unfortunately, this means that the team expects that we are still at least a decade away from seeing a truly practical design, but at least the technology now seems feasible. Unfortunately, our smartphones won’t be lasting all week on a single charge just yet.

Quantum Dot is promising for more than just displays

Posted by wicked, Thursday, September 10, 2015

Quantum Dot technology is shaping up to be the next big step forward in LCD display technology, although we are yet to see our first smartphone implementation. The technology will likely appear in more devices over the coming years, but the science behind these new displays also boasts promising properties for other applications. We’re going to take a quick look at two of them: image sensors and spectrometry.

What is a Quantum Dot?

Before we begin: a quantum dot is a tiny nanocrystal made from various semiconducting materials, typically between 2 and 10 nanometers in diameter. Quantum dots exhibit semiconductor properties and are most widely known for their ability to emit light of different colors, a phenomenon first studied by Michael Faraday back in 1857.

Properties, such as light emission, are directly linked to the size of the nanocrystal. This is useful to know when it comes to displays, cameras, and light detection technologies, as it allows for a quantum dot to be manufactured at a specific size in order to work with a very specific frequency of light. This allows for the creation of the RGB colors required for displays and could also be leveraged to create color detecting pixels for image sensors.
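The size-to-color relationship above can be sketched with the Brus effective-mass approximation. The material constants below are rough literature ballpark values for CdSe, and the model itself is only approximate, so treat the outputs as illustrative rather than exact:

```python
import math

H = 6.626e-34         # Planck constant (J*s)
M0 = 9.109e-31        # electron rest mass (kg)
E_CHARGE = 1.602e-19  # elementary charge (C)
EPS0 = 8.854e-12      # vacuum permittivity (F/m)

# Rough CdSe parameters (approximate literature values):
E_GAP_EV = 1.74         # bulk band gap (eV)
M_E, M_H = 0.13, 0.45   # effective electron/hole masses (units of M0)
EPS_R = 10.6            # relative permittivity

def emission_wavelength_nm(radius_m):
    """Brus equation: bulk gap + quantum confinement term - Coulomb term."""
    confinement = (H ** 2 / (8 * radius_m ** 2)) * (1 / (M_E * M0) + 1 / (M_H * M0))
    coulomb = 1.8 * E_CHARGE ** 2 / (4 * math.pi * EPS_R * EPS0 * radius_m)
    energy_ev = E_GAP_EV + (confinement - coulomb) / E_CHARGE
    return 1240 / energy_ev  # photon wavelength in nm

print(round(emission_wavelength_nm(2e-9)))  # ~486 nm: a 4 nm dot emits blue-green
print(round(emission_wavelength_nm(3e-9)))  # ~598 nm: a 6 nm dot shifts toward red
```

The key behaviour matches the text: shrinking the crystal raises the confinement energy, pushing the emitted color toward the blue end of the spectrum.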


The size of QD nanocrystals determines the color emitted by the colloid.

New use cases

Although displays may have grabbed much of the early attention with Quantum Dots, the technology is also highly suitable for a variety of sensor implementations. By configuring Quantum Dots as absorptive filters with specific bandpass ranges, it is possible to use them to detect specific wavelengths of light, turning the use case from a display to a light sensor.

As a quantum dot emits one color when excited by light of a higher energy, it essentially acts as a filter. In other words, blue light can excite a red QD because it carries more energy, but red light cannot excite a blue QD. Using a series of known filters and light detecting sensors, it is possible to work out what frequency of light is hitting the sensor. This has already been prototyped in an image sensor using 195 different broadband QD filters.
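The reconstruction step can be sketched as a linear inverse problem: if each filter's transmission curve is known, the sensor readings are a matrix product of those curves with the incoming spectrum, so the spectrum can be estimated by least squares. The filter shapes and bin counts below are made up purely for illustration:

```python
import numpy as np

BINS, FILTERS = 20, 195                 # wavelength bins; the prototype's 195 filters
wavelengths = np.linspace(400, 700, BINS)
centres = np.linspace(400, 700, FILTERS)

# Each row: one QD filter's (assumed Gaussian) transmission over the bins.
T = np.exp(-((wavelengths - centres[:, None]) / 40.0) ** 2)

true_spectrum = np.exp(-((wavelengths - 550) / 30.0) ** 2)  # a greenish source
readings = T @ true_spectrum            # what the 195 sensor cells would measure

# Solve the overdetermined system T @ spectrum = readings.
estimate, *_ = np.linalg.lstsq(T, readings, rcond=None)
print(np.allclose(T @ estimate, readings))  # True: readings fully explained
```

With far more filters than wavelength bins, the system is overdetermined, which is also why a consumer device might get away with far fewer dots, as discussed below.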


Don’t be fooled by the marketing terms. The LG G4’s Quantum Display does not make use of Quantum Dot technology.

Quantum Dot filters can be finely tuned across a huge range of wavelengths, from deep violet to near-infrared, which makes them useful for building a spectrometer. While spectrometers are already in widespread use, the complex and large nature of their components makes them expensive. Quantum Dot based image sensors could be produced in smaller form factors and at much lower costs, enabling new products for consumer and industrial applications.

Light spectrum colors

Using Quantum Dots as filters can provide information about objects.

Currently, spectrometers are used in biomedical research, forensic science and chemical detection fields. An infrared spectrometer can be used to analyse the elements of a compound through molecular vibrations, while ultraviolet light can be used to detect electronic excitations. The visible spectrum in between applies to what we can see with our own eyes, and spectrometers can detect these levels very accurately too.

Quantum Dot image sensors could lead to compact consumer products. These could include portable medical or self-diagnostic tools to help diagnose skin conditions, analyse urine samples or track pulse and oxygen levels.

Development could also increase access to information for more seemingly mundane tasks, such as evaluating fabric or paint samples in a store to see how well they match up with other colors in your home.

See also: QuantumFilm image sensors explained

Jie Bao, a former MIT postdoc and currently a Visiting Associate in Physics at Caltech, suggests that colloidal quantum dot materials can be applied to a sensor array using a variety of techniques, including inkjet printing or direct printing, which would be quite cost effective. Furthermore, implementation in consumer electronics may not even require the 195 dots already proposed for such a sensor; a reasonable system may be able to get away with a dozen or so dots spread throughout the spectrum, providing enough information and accuracy for most consumer applications.

If such a Quantum Dot image sensor can be manufactured at a reasonable cost, in the future we may see high-resolution image sensors powering a range of spectrometers found in industrial, scientific and consumer grade products. The technology is suitable for much more than just displays, and TVs are just the beginning.

Sony developing 1,000 fps image sensor for intelligent computer sensing

Posted by wicked, Monday, September 7, 2015


Sony is the market leader in the image sensor business and the company is looking to maintain a significant lead over its competitors with new technologies. One of the latest is Sony’s research into an affordable 1,000 fps capable image sensor, which is being developed in conjunction with Nissan Motor Co. and Masatoshi Ishikawa, a Tokyo University professor.

The new 1,000 fps sensor has been developed by stacking the circuit and sensor parts for faster speeds and a higher resolution, rather than placing the components side by side. Sony has been able to reach speeds over 900 fps with some prototypes, while your typical modern smartphone camera sensor might be capable of slow motion video capture at 120 frames per second.

However, this isn’t really a fair comparison as these fast image sensors aren’t necessary for capturing the perfect picture or home video, but they do open up development of new technologies that make use of intelligent computer sensing. The work being conducted with Nissan could enable driverless vehicles that can quickly detect and avoid hazards, or be put to use to develop faster industrial manufacturing methods.

“The images for sensing require a different kind of chip, and the challenge is converting technologies that make beautiful photos to new uses.” – Shinichi Yoshimura, Sony

High speed image sensors can also play an important role in lowering the cost of advanced gesture recognition systems. Such technologies at an affordable price point could find use in a wide range of consumer and industrial applications, including wearable gadgets and other mobile products.

“High-speed image sensors are a niche industry, but Sony has the power to take it mainstream … And that may be just two years away.” – Masatoshi Ishikawa, University of Tokyo

1,000 fps image sensors already exist but are hugely expensive and relatively large, which prohibits their widespread use. These types of sensors cost anywhere from $1,000 to $100,000 from companies including Sony and Vision Research Inc. By adapting its existing mobile image sensor technology, Sony should be able to produce competitive chips at a fraction of previous sizes and costs.

Sony anticipates that image sensor sales could climb as much as 62 percent to 1.5 trillion yen in three years. However, the company also expects that its rivals will catch up with its mobile image sensor technology, so finding new markets will be key in order to stay ahead. Sony is apparently investing €1.5 billion ($1.7 billion) in its image sensor operations in FY16, five times the amount that it invested in FY15.

This new sensor technology may help Sony diversify its sensors into new markets and could result in some exciting new products for us consumers. Definitely something to keep an eye on.

QuantumFilm image sensors explained

Posted by wicked, Thursday, August 20, 2015

Smartphone cameras have come a long way recently, with sensor and lens setups in some of this year’s flagships offering up some seriously good looking results; just look at the Galaxy S6 or the LG G4, for example. However, even the best smartphone cameras still suffer from limited versatility, often with poor low light performance and heavy noise and crosstalk when compared with the higher end sensors found in DSLR cameras.

Furthermore, the resolution race has seen increasingly high-resolution cameras in smartphones, but our testing and experience has shown us that the cameras with the most pixels aren’t necessarily producing the best results. That being said, HTC’s attempt to buck this trend with its Ultrapixel technology failed to produce superior results either. The fact of the matter is that sensors, and therefore pixel sizes, in smartphones are limited by their compact size.

See also: LG G4 vs Samsung Galaxy S6 / S6 Edge – Camera Shootout

InVisage, a fabless semiconductor company, is planning to bring its unique QuantumFilm technology to market, which might provide a big leap forward in image quality for small form factor mobile devices.

The problem

The crux of the issue is down to the compromises made between module size and light capture. For a little background: modern CMOS image sensors are built up of lots of sensor/pixel cells, each configured with a filter to detect how much red, green, or blue light is in the scene and in which locations. But these sensors aren’t perfect; there is a certain amount of reflection and loss as light enters a sensor, and there can also be crosstalk between adjacent cells and electronic interference, which manifests as noise and color artifacts.


Low light pictures often expose the weaknesses of a sensor.

These problems are more pronounced in compact smartphone sensors, as the cells are smaller and packed in closer together. Further increasing the resolution of a sensor compounds these problems, leading to more noise and worse performance in low light conditions.

The image sensor industry has come up with a number of innovations to help combat these problems. Moving over from frontside to backside illumination sensors helped reduce loss as the light reached the base of the cell, while Samsung’s Isocell aimed to better insulate nearby cells from each other, resulting in less crosstalk. These are fine solutions, but they don’t completely eliminate the aforementioned problems.

QuantumFilm’s solution

QuantumFilm image sensor section

InVisage’s QuantumFilm technology aims to address these problems by tweaking traditional sensor designs to make use of its own light sensing layer, which promises to capture more light and avoid crosstalk. Much of the design remains the same as today’s CMOS sensors; it is the QuantumFilm layer itself that is of particular interest.

Rather than using silicon photodiodes, InVisage’s sensors use the company’s own metal-chalcogenide quantum-dot film to capture much more light near the surface of the sensor. This film is built from quantum dots, small nanocrystals with quantum mechanical properties, arranged in a colloid, a solution made up of evenly distributed small particles.

CMOS BSI vs QuantumFilm

This layer is connected in between the usual filter layer and the electrode circuitry. When a certain color of light reacts with the QuantumFilm layer, the circuit can detect the region in which this reaction occurred to determine the pixel’s color. This way, the camera’s resolution does not affect the amount of light captured in the way that it does in traditional CMOS sensors, and there’s apparently less crosstalk than with solutions that require larger photosensitive cells. In other words, the resolution of the filter layer and the density of the detecting circuitry determine the resolution, while the film layer remains unchanged.

The video below offers a pretty comprehensive explanation of what the company wants to achieve, without the techno-babble.

This whole idea seems rather well suited to smartphones, where compact hardware is essential. QuantumFilm has a few benefits in this regard, as it can be produced at very thin sizes, cutting up to 0.8mm off even the smallest CMOS sensors, which is a small but valuable space saving inside a smartphone.

QuantumFilm absorption strength

Furthermore, QuantumFilm boasts a light absorption capacity up to eight times greater than some silicon CMOS sensors, allowing for greater dynamic range, better low light shots and less noise. It can also be used for infrared light detection, opening the door for new and interesting compact product ideas.

How soon?

Like many other up and coming pieces of technology, the big problem with QuantumFilm is that it remains untested in real world consumer products. There has been a lot of talk for a number of years, but nothing for us to really sink our teeth into.

As a small company, InVisage is currently only producing a small number of wafers, but is looking to ramp up production in the second half of this year. TSMC will be helping InVisage further increase production with additional capacity next year.

We are still probably in for quite a wait until the first smartphones appear sporting the technology, but QuantumFilm is certainly something to keep an eye on.

Google and MIT researchers demo their photo reflection removal algorithm

Posted by wicked, Wednesday, August 5, 2015

Smartphone camera lens close up ShutterStock

I’m sure we have all witnessed those pesky reflections while trying to grab a photograph through a window, but those days may soon be behind us, thanks to research conducted by Google and MIT. The group presented a paper at Siggraph 2015 and has published a video demonstrating its algorithm for removing reflections from your pictures.

The software isn’t just good for reflections though; it can also be used to analyse and remove other obstacles from your pictures, such as raindrops on the glass or even a chain-link fence that partially obstructs your view. It’s not 100 percent perfect, but it seems to do a pretty good job of mostly removing these annoyances in a wide range of scenarios, including tough low-light scenes.


The developers state that the algorithm works on a short video clip that could, for example, be captured from your phone. From this clip, the algorithm sorts out the depth of the scene using edge detection differences between successive frames and can figure out any obstructions in the foreground. A somewhat similar idea is used for techniques like post processing depth of field adjustments and 3D parallax images, which rely on multiple points of view.

From here, the software can fill in the obstructed space with information from other frames, resulting in a clearer final picture. One creepy “side effect” of the technology is that it can also quite accurately recreate a clear image of whatever is contained within a reflection or occlusion.
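The "fill in from other frames" step can be illustrated, in a much cruder form than the actual algorithm, with a per-pixel median across aligned frames: once the background is registered, a moving obstruction only covers any given pixel in a minority of frames, so the median recovers the scene behind it. The frame data here is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
background = rng.integers(0, 256, (64, 64)).astype(float)  # the static scene
frames = np.stack([background.copy() for _ in range(5)])   # 5 aligned frames

for i in range(5):                                  # occluder: a bright stripe
    frames[i, :, 10 + 5 * i : 14 + 5 * i] = 255.0   # that shifts between frames

restored = np.median(frames, axis=0)    # per-pixel median over the stack
print(np.allclose(restored, background))  # True: the stripe is gone
```

The real pipeline is far more sophisticated (it aligns frames and models the obstruction layer explicitly), but this captures the core idea of borrowing unobstructed pixels from neighbouring frames.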

The video below has a really detailed explanation about how this is accomplished and a few more examples, which is well worth a watch if you’re keen on details.

This type of technique has been tried before, but previous results have been rather mixed; Google and MIT’s implementation seems the best so far. Unfortunately, we don’t know if or when this type of technology will become available for smartphone cameras. Here’s hoping that someone picks up the idea and brings it to consumers.

A look at PowerVR’s Ray Tracing GR6500 GPU

Posted by wicked, Friday, July 24, 2015


Last week, Imagination Technologies announced that its GR6500 ray tracing GPU taped out on a test chip, a major milestone on its way into a mobile product. The GR6500 is unique, as this is Imagination’s first ray tracing GPU based on its latest PowerVR Wizard architecture. A series of articles released this week explain exactly what’s behind this technology, so let’s delve into the key points.

Ray tracing, for those unfamiliar with the term, is a method of modelling lighting in a virtual 3D space, which aims to closely mimic the actual physics of light. The method is in the name, the technique “traces” the path of light rays through the 3D space to simulate the effects of its encounters with virtual objects and collects this data for the pixels displayed on screen. It can produce highly realistic looking lighting, shadows, reflection and refraction effects, and is sometimes used in 3D animated movies.
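The core primitive is simple to write down, even if tracing millions of rays per frame is not. Here is a minimal sketch in Python, using a sphere in place of the triangle meshes real scenes are built from:

```python
import math

def ray_sphere_t(origin, direction, centre, radius):
    """Distance t to the nearest sphere hit along a normalised ray, or None."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c          # discriminant; direction assumed normalised
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None   # ignore hits behind the ray origin

# Camera at the origin looking down +z at a unit sphere centred at (0, 0, 5):
print(ray_sphere_t((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A renderer repeats this per pixel against every object (via an acceleration structure), then spawns secondary rays from each hit point for shadows, reflections and refractions.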

As you can probably imagine, there can be a ton of different light sources to calculate using this method, and figuring them all out is extremely computationally and memory expensive, so game developers usually opt for cheaper simulations like rasterized rendering. However, you can severely cut down on ray tracing processing time by using dedicated hardware, which is what Imagination Technologies has done with its PowerVR Wizard GPU.

PowerVR GR6500

The GR6500 features a dedicated Ray Tracing Unit (RTU), which calculates and keeps track of all the data. As for what the RTU actually does, it first creates a database representation of the 3D space and tracks where the rays intersect with the geometry.

“We approached the problem differently. While others in the industry were focused on solving ray tracing using GPU compute, we came up with a new approach leveraging on our prior expertise in rasterized graphics”– Luke Peterson, Imagination’s director of research for PowerVR Ray Tracing

When running the shader for each pixel, the RTU searches the database to find the closest intersecting triangle in order to figure out the color of the pixel. This process can also cast additional rays to simulate reflective properties, which in turn will affect the color of other pixels. Keeping track of these secondary rays is hugely important too, and it is all handled in the RTU to improve performance.

OK, so what does this actually mean in terms of performance, graphics and games?

Ultimately, getting closer to photorealistic graphics is the aim of the game, and this can take a number of forms, from accurate reflections to lighting and shadows. Compared with GPU compute or software based ray tracing approaches, the use of dedicated hardware makes the GR6500 up to 100 times more efficient, which is why traditional GPUs depend on different approaches. This huge reduction in processing costs opens up new avenues for optimized ray tracing based graphics effects in mobile titles.

Imagination Technologies gives an example comparison of ray traced vs traditional cascaded shadow maps. You can read all about the technical details in the official blog post, but the short of it is that the ray tracing and penumbra simulation method produces much more accurate shadows than the rougher approximation technique of cascaded shadow maps. This is essentially because ray tracing simulates light paths accurately regardless of distance, while shadow mapping is limited to a more finite resolution and distance scaling to maintain performance.
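The difference is easy to see in code: a ray-traced shadow test simply asks whether anything sits between a surface point and the light, with no shadow-map resolution involved. A minimal Python sketch with a single spherical occluder (illustrative geometry, not Imagination's implementation):

```python
import math

def first_hit_t(origin, direction, centre, radius):
    """Distance to the nearest sphere hit along a normalised ray, or None."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None  # small epsilon avoids self-shadowing

def in_shadow(point, light, centre, radius):
    """Cast a shadow ray from the surface point towards the light."""
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    direction = [x / dist for x in to_light]
    t = first_hit_t(point, direction, centre, radius)
    return t is not None and t < dist  # occluder between point and light?

# A unit sphere at (0, 0, 5) blocks the light at (0, 0, 10) from the origin,
# but not from a point off to the side:
print(in_shadow((0, 0, 0), (0, 0, 10), (0, 0, 5), 1.0))  # True
print(in_shadow((5, 0, 0), (0, 0, 10), (0, 0, 5), 1.0))  # False
```

Because the test is exact for every shadow ray, accuracy does not degrade with distance the way a fixed-resolution shadow map does.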



Furthermore, using the hardware based technique reduces memory traffic compared with cascaded shadows. In one test, a single scene used up 233MB of memory for cascaded shadows compared with 164MB for ray tracing. Subtract the “G Buffer” setup cost of the scene and ray tracing can result in a 50 percent reduction in memory traffic. Given that memory bandwidth is a limiting factor in mobile GPUs, especially when compared with desktop GPUs, this reduction can give quite a nice boost to performance as well.
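Those figures imply a shared setup cost that a little algebra can back out: if the shadow-specific traffic halves once a common G-Buffer cost g is removed, then 164 - g = 0.5 x (233 - g). The resulting value is our inference from the published numbers, not a figure Imagination quotes:

```python
# Solve 164 - g = 0.5 * (233 - g) for the shared G-Buffer cost g (in MB):
# doubling both sides gives 2*164 - 2g = 233 - g, so g = 2*164 - 233.
cascaded, ray_traced = 233, 164
g_buffer = 2 * ray_traced - cascaded
print(g_buffer)                                            # 95 MB implied
print((cascaded - g_buffer) / 2 == ray_traced - g_buffer)  # True: traffic halves
```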

In terms of frame time, Imagination Technologies’ example shows an average reduction of close to 50 percent. So not only do ray traced shadows look better, but they can also be implemented with a higher frame rate than cascaded shadows, thanks to the use of dedicated hardware.


There is one point worth noting, though: it’s up to developers to implement these types of effects in their games. With only a small selection of compatible hardware heading to market any time soon, we probably won’t see the benefits for a while yet.

However, someone has to take the first step, and Imagination Technologies’ GR6500 GPU may be the starting point for some much more visually impressive mobile graphics a little way down the line.

WiFi Aware enables instant local device communication

Posted by wicked, Wednesday, July 15, 2015


WiFi may empower us to do clever things but the technology itself isn’t that smart. It doesn’t communicate anything meaningful until a full connection has been established, which means that it can’t tell us useful pieces of information about the service that we want to connect to. However, the newly unveiled WiFi Aware specification aims to change this.

WiFi Aware enables certified products to discover and communicate with other nearby devices without relying on an internet connection, sort of like Bluetooth Low Energy or Qualcomm’s LTE Direct. Devices will continually broadcast small packets of information, which could allow applications to push notifications to other devices or provide information to a user about a nearby service, person or business, all before making a regular WiFi connection.

WiFi Aware is part of the growing trend towards smaller hubs connected up to the larger Internet of Things. This could power all sorts of simple conveniences, such as turning on your lights when you’re in range of your home WiFi or finding a nearby shop which stocks your favorite products.

A key part of the idea is contextual data. We don’t want to be bombarded with all of the information from the various nearby networks. Instead, users will have control over the type of data they are alerted to and the data that they push to other devices, a filter if you will. WiFi Aware devices know about everything in close proximity, but only connect to relevant sources of information.

Privacy and the impact on battery life are sensible concerns, but Edgar Figueroa, President of the WiFi Alliance, says that WiFi Aware is very power efficient and consumes less energy than traditional WiFi. As for privacy, apps that use the service will have opt-in/out settings, and the lack of an instant Internet connection offers some extra protection.

The first wave of WiFi Aware gadgets and applications is not here just yet, but Broadcom, Intel, Marvell and Realtek already have certified chips for future gadgets. Social networks could roll out applications with WiFi Aware before the end of the year.


There are enough people out there who think Quad HD on a smartphone is ridiculous and completely unnecessary, and we’re guessing they won’t be too happy to hear that Samsung is already thinking beyond Quad HD. Way beyond.

According to Korea’s Electronic Times, Samsung has challenged itself to build a display of unprecedented resolution: 11K, for a pixel density of 2,250 pixels per inch. Samsung has not offered exact specifications, but throwing the numbers into a DPI calculator shows that achieving 2,250 ppi on a 5.5-inch mobile display would require a resolution of approximately 11,000 x 6,000 pixels.
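The arithmetic is simple to check: pixel density is the diagonal resolution in pixels divided by the diagonal size in inches:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count over diagonal length."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2560, 1440, 5.5)))   # 534  (Quad HD)
print(round(ppi(3840, 2160, 5.5)))   # 801  (4K)
print(round(ppi(11000, 6000, 5.5)))  # 2278 (close to the 2,250 ppi target)
```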

That’s absolutely amazing, given that today’s best smartphone displays offer Quad HD (2560 x 1440, around 534 ppi on a 5.5-inch screen), while the next big step is 4K (3840 x 2160, around 801 ppi on a 5.5-inch).

The project was announced by Samsung Display executive Chu Hye Yong during a workshop in Korea. Samsung is teaming up with 13 Korean and foreign companies for this moonshot and is enlisting the help of the Korean government.

“We are hoping that we are able to show such technologies at Pyeongchang Olympics if there is a progress in developing technologies. Although some might think that 11K as ‘over specification’ that consumers do not need, this can work as a basis for Korean display industry take another leap if related materials and parts improve through this,” said Chu.

Don’t expect the project to bear fruit anytime soon. The goal is to show a working prototype of the new display by the 2018 Pyeongchang Winter Olympics.

Samsung Galaxy S6 Edge-35

Many believe that even the Galaxy S6 Edge’s Quad HD screen is overkill…

Okay, but why?

Now, for the key question: why would Samsung want to develop such an extremely dense display? It’s for 3D. When you have so many pixels to work with, you can create 3D effects without the need for special glasses or other cumbersome techniques.

But it’s the rise of VR that could really push display manufacturers towards new limits of pixel density. When you strap a display a few inches from your eyes, you can’t have too many pixels per inch. The Oculus Rift, due to launch next year, offers a resolution of 2160 x 1200, and, even if you can notice the pixelation, the experience can be amazing. Now imagine what you can do with ten times as many pixels.

If there’s any entity in the world that is able to create a display that is three times as dense as 4K, it’s Samsung. Of course, processors and batteries will need to keep up. For now, Full HD remains the standard spec, Quad HD appears in some of the nicer phones out there, and 4K displays are probably in the labs, waiting for their place in the spotlight. For more info on 4K, the manufacturers that are working on it, and its effects on the industry, check out our comprehensive look at the present and future of 4K technology.
