
A look at PowerVR’s Ray Tracing GR6500 GPU

Posted by wicked, Friday, July 24, 2015


Last week, Imagination Technologies announced that its GR6500 ray tracing GPU taped out on a test chip, a major milestone on its way into a mobile product. The GR6500 is unique, as this is Imagination’s first ray tracing GPU based on its latest PowerVR Wizard architecture. A series of articles released this week explain exactly what’s behind this technology, so let’s delve into the key points.

Ray tracing, for those unfamiliar with the term, is a method of modelling lighting in a virtual 3D space that aims to closely mimic the actual physics of light. The method is in the name: the technique “traces” the path of light rays through the 3D space to simulate the effects of their encounters with virtual objects, and collects this data for the pixels displayed on screen. It can produce highly realistic lighting, shadow, reflection and refraction effects, and is sometimes used in 3D animated movies.

As you can probably imagine, there can be a ton of different light sources to calculate using this method, and figuring them all out is extremely expensive in both computation and memory, so game developers usually opt for cheaper approximations like rasterized rendering. However, dedicated hardware can severely cut down on ray tracing processing time, which is exactly what Imagination Technologies has done with its PowerVR Wizard GPU.

PowerVR GR6500

The GR6500 features a dedicated Ray Tracing Unit (RTU), which calculates and keeps track of all the data. As for what the RTU actually does, it first creates a database representation of the 3D space and tracks where the rays intersect with the geometry.

“We approached the problem differently. While others in the industry were focused on solving ray tracing using GPU compute, we came up with a new approach leveraging on our prior expertise in rasterized graphics”– Luke Peterson, Imagination’s director of research for PowerVR Ray Tracing

When running the shader for each pixel, the RTU searches the database to find the closest intersecting triangle in order to figure out the color of the pixel. This process can also cast additional rays to simulate reflective properties, which in turn will affect the color of other pixels. Keeping track of these secondary rays is hugely important too, and it all stays in the RTU to improve performance.
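To make that closest-hit query concrete, here is a minimal software sketch using the standard Möller–Trumbore ray/triangle test. The two-triangle "scene" and the linear scan are made-up illustrations of the query the RTU answers; Imagination's hardware uses an accelerated scene database, not a brute-force loop.

```python
# Software sketch of the closest-hit query an RTU accelerates in hardware:
# for a pixel's ray, find the nearest triangle it intersects.
# Moller-Trumbore intersection test; illustrative scene, not the real RTU.

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Distance t along the ray to the triangle, or None for a miss."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv             # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def closest_hit(origin, direction, triangles):
    """The per-ray query: distance to the nearest intersected triangle."""
    hits = (ray_triangle_intersect(origin, direction, *tri) for tri in triangles)
    return min((t for t in hits if t is not None), default=None)

scene = [((-1, -1, 5), (1, -1, 5), (0, 1, 5)),   # triangle at z = 5
         ((-1, -1, 9), (1, -1, 9), (0, 1, 9))]   # triangle at z = 9
print(closest_hit((0, 0, 0), (0, 0, 1), scene))  # 5.0 (nearer triangle wins)
```

A real scene holds millions of triangles, which is why the RTU keeps the geometry in an acceleration structure instead of scanning it linearly.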

Ok, so what does this actually mean in terms of performance, graphics and games?

Ultimately, reaching closer to photorealistic graphics is the aim of the game, but this can take a number of forms, from accurate reflections to lighting and shadows. Compared with GPU compute or software-based ray tracing approaches, the use of dedicated hardware makes the GR6500 up to 100 times more efficient, which is why traditional GPUs depend on different approaches. This huge reduction in processing cost opens up new avenues for optimized ray-tracing-based graphics effects in mobile titles.

Imagination Technologies gives an example comparison of ray traced vs traditional cascaded shadow maps. You can read all about the technical details in the official blog post, but the short of it is that the ray tracing and penumbra simulation method produces much more accurate shadows than the rougher approximation technique of cascaded shadow maps. This is essentially because ray tracing simulates light paths accurately regardless of distance, while shadow mapping is limited to a finite resolution and distance scaling to maintain performance.



Furthermore, using the hardware based technique reduces memory traffic compared with cascaded shadows. In one test, a single scene used up 233MB of memory for cascaded shadows compared with 164MB for ray tracing. Subtract the “G Buffer” setup cost of the scene and ray tracing can result in a 50 percent reduction in memory traffic. Given that memory bandwidth is a limiting factor in mobile GPUs, especially when compared with desktop GPUs, this reduction can give quite a nice boost to performance as well.
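A quick back-of-envelope check of those figures: the blog post does not state the G-buffer setup cost directly, so the value below is inferred from the quoted 233 MB, 164 MB, and ~50 percent numbers.

```python
# Sanity-checking the article's memory-traffic figures.
# 233 MB / 164 MB are from Imagination's comparison; the G-buffer cost
# is derived from those numbers, not stated in the article.

cascaded_mb = 233.0   # total traffic, cascaded shadow maps
raytraced_mb = 164.0  # total traffic, ray traced shadows

# Raw reduction, before accounting for the shared G-buffer setup:
raw_reduction = (cascaded_mb - raytraced_mb) / cascaded_mb
print(f"raw reduction: {raw_reduction:.1%}")            # ~29.6%

# The quoted ~50% applies to traffic excluding the G-buffer cost g,
# i.e. (164 - g) = 0.5 * (233 - g). Solving for g:
g = 2 * raytraced_mb - cascaded_mb
print(f"implied G-buffer cost: {g:.0f} MB")             # 95 MB
print(f"shadow-only traffic: {cascaded_mb - g:.0f} vs {raytraced_mb - g:.0f} MB")
```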

In terms of frame time, Imagination Technologies’ example shows an average reduction of close to 50 percent. So not only do ray traced shadows look better, but they can also be implemented with a higher frame rate than cascaded shadows, thanks to the use of dedicated hardware.


There is one point worth noting, though: it's up to developers to implement these types of effects in their games. With only a small selection of compatible hardware heading to market any time soon, we probably won't see the benefits for a while yet.

However, someone has to take the first step, and Imagination Technologies' GR6500 GPU may be the starting point for some much more visually impressive mobile graphics a little way down the line.

WiFi Aware enables instant local device communication

Posted by wicked, Wednesday, July 15, 2015


WiFi may empower us to do clever things but the technology itself isn’t that smart. It doesn’t communicate anything meaningful until a full connection has been established, which means that it can’t tell us useful pieces of information about the service that we want to connect to. However, the newly unveiled WiFi Aware specification aims to change this.

WiFi Aware enables certified products to discover and communicate with other nearby devices without relying on an internet connection, sort of like Bluetooth Low Energy or Qualcomm’s LTE Direct. Devices will continually broadcast small packets of information, which could allow applications to push notifications to other devices or provide information to a user about a nearby service, person or business, all before making a regular WiFi connection.

WiFi Aware is part of the growing trend towards smaller hubs connected to the larger Internet of Things. This could power all sorts of simple conveniences, such as turning on your lights when you're in range of your home WiFi or finding a nearby shop which stocks your favorite products.

A key part of the idea is contextual data. We don't want to be bombarded with information from every nearby network. Instead, users will have control over the type of data they are alerted to and the data they push to other devices, a filter if you will. WiFi Aware devices know about everything in close proximity, but only connect to relevant sources of information.

Privacy and the impact on battery life are sensible concerns, but Edgar Figueroa, President of the WiFi Alliance, says that WiFi Aware is very power efficient and consumes less energy than traditional WiFi. As for privacy, apps that use the service will have opt-in/out settings, and the lack of an instant Internet connection offers some extra protection.

The first wave of WiFi Aware gadgets and applications isn't here just yet, but Broadcom, Intel, Marvell and Realtek already have certified chips for future gadgets. Social networks could roll out applications with WiFi Aware before the end of the year.


There are enough people out there that think Quad HD on a smartphone is ridiculous and completely unnecessary, and we’re guessing they won’t be too happy to hear that Samsung is already thinking beyond Quad HD. Way beyond.

According to Korea's Electronic Times, Samsung has challenged itself to build a display of unprecedented resolution: 11K, for a pixel density of 2,250 pixels per inch. Samsung has not offered exact specifications, but throwing the numbers into a DPI calculator shows that achieving 2,250 ppi on a 5.5-inch mobile display would require a resolution of approximately 11,000 by 6,000 pixels.

That’s absolutely amazing, given that today’s best smartphone displays offer Quad HD (2560×1440, 530ppi on a 5.5-inch screen), while the next big step is 4K (3840×2160, 800 ppi on a 5.5-inch).
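The pixel-density figures above are easy to reproduce: ppi is simply the diagonal resolution in pixels divided by the diagonal size in inches. The exact values land slightly off the article's rounded numbers.

```python
import math

# Reproducing the article's pixel-density figures for a 5.5-inch panel.
# ppi = diagonal pixel count / diagonal size in inches.

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

for name, w, h in [("Quad HD", 2560, 1440),
                   ("4K", 3840, 2160),
                   ("11K (approx.)", 11000, 6000)]:
    print(f"{name}: {ppi(w, h, 5.5):.0f} ppi")
# Quad HD: 534 ppi, 4K: 801 ppi, 11K: 2278 ppi
```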

The project was announced by Samsung Display executive Chu Hye Yong during a workshop in Korea. Samsung is teaming up with 13 Korean and foreign companies for this moonshot and is enlisting the help of the Korean government.

“We are hoping that we are able to show such technologies at Pyeongchang Olympics if there is a progress in developing technologies. Although some might think that 11K as ‘over specification’ that consumers do not need, this can work as a basis for Korean display industry take another leap if related materials and parts improve through this,” said Chu.

Don’t expect the project to bear fruit anytime soon. The goal is to show a working prototype of the new display by the 2018 Pyeongchang Winter Olympics.


Many believe that even the Galaxy S6 Edge’s Quad HD screen is overkill…

Okay, but why?

Now, for the key question: why would Samsung want to develop such an extremely dense display? It's for 3D. With so many pixels to work with, you can create 3D effects without the need for special glasses or other cumbersome techniques.

But it’s the rise of VR that could really push display manufacturers towards new limits of pixel density. When you strap a display a few inches from your eyes, you can’t have too many pixels per inch. The Oculus Rift, due to launch next year, offers a resolution of 2160 x 1200, and, even if you can notice the pixelation, the experience can be amazing. Now imagine what you can do with ten times as many pixels.

If there's any entity in the world that is able to create a display three times as dense as 4K, it's Samsung. Of course, processors and batteries will need to keep up. For now, Full HD remains the standard spec, Quad HD appears in some of the nicer phones out there, and 4K displays are probably in the labs, waiting for their place in the spotlight. For more info on 4K, the manufacturers that are working on it, and its effects on the industry, check out our comprehensive look at the present and future of 4K technology.

Korean researchers can prevent flash memory decay and lengthen battery life

Posted by wicked, Friday, July 10, 2015


Wouldn’t it be great if our technology didn’t suffer from eventual slowdowns with age? Well, Hanyang University researchers have announced a new technology that could slow down the rate of decay, boost performance and increase smartphone battery life. The technology is named WALDIO, or Write Ahead Logging Direct IO.

The idea revolves around the internal NAND flash memory used for storage. Flash memory has a limited number of write/erase cycles and ages with use. Constant writing eventually results in dead sectors, which slows down the reading and writing processes used by all applications. Furthermore, reading and writing to flash is an energy-consuming process, so optimizing these types of tasks could increase smartphone battery life.

“This tech will make it possible to use low-priced flash memory for a long time, like expensive flash memory.”  – Professor Won You-jip

The problem, as the researchers see it, is related to the SQLite database management system and a number of unnecessary writes to storage within the Android IO stack, which degrade flash memory faster than necessary. The researchers want to dispense with expensive file system journaling without compromising file integrity.

Without the jargon, WALDIO simply records smaller amounts of data to flash memory in order to preserve its lifespan.

WALDIO aims to optimize SQLite IO using block pre-allocation with explicit journaling, header embedding and group synchronisation. You can read all about it in greater depth in the published PDF, but it’s certainly not light reading.
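For readers unfamiliar with the journaling overhead being targeted, here is a toy sketch (not WALDIO itself, whose details are in the paper) of the basic reason write-ahead logging cuts IO: rollback journaling writes every modified page twice, plus sync barriers, while a write-ahead log appends each change only once.

```python
# Toy model of commit IO cost: rollback journaling vs write-ahead logging.
# Counts logical write operations only; real SQLite behavior is more subtle.

def journaled_commit(pages):
    writes = []
    for p in pages:
        writes.append(("journal", p))   # copy original page for rollback
    writes.append(("journal", "sync"))  # barrier: journal must be durable
    for p in pages:
        writes.append(("db", p))        # write the new page in place
    writes.append(("db", "sync"))       # barrier: database must be durable
    return writes

def wal_commit(pages):
    writes = [("wal", p) for p in pages]  # append-only log of new pages
    writes.append(("wal", "sync"))        # single durability barrier
    return writes

pages = ["p1", "p2", "p3"]
print(len(journaled_commit(pages)), "IO ops with journaling")  # 8
print(len(wal_commit(pages)), "IO ops with WAL")               # 4
```

WALDIO goes further than this, eliminating file system journaling on top of the database log, but the doubled-write problem is the core of what it attacks.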

WALDIO performance

Testing on a Samsung Galaxy S5 revealed a significant reduction in total IO volume when performing a number of operations, down to around one sixth of the original size. The test also demonstrated up to 4.6 times faster command performance over the default methods, freeing up the storage for other tasks. In terms of what this means for us users, our smartphones could operate up to 20 times faster at certain tasks and battery life could be extended by 39 percent or more.

Unfortunately, we don’t yet know if WALDIO could be implemented on existing devices or even if the technology will ever make its way into a commercially available device. The team will be presenting their study at the Usenix Annual Technical Conference in Santa Clara today, and will hopefully grab some attention.

IBM announces the world’s first working 7nm chip

Posted by wicked, Friday, July 10, 2015

IBM, in conjunction with GlobalFoundries, Samsung and SUNY, has unveiled the world’s first successful production of a 7nm FinFET chip with fully working transistors. The achievement comes as part of a $3 billion, five year research program spearheaded by IBM, which aims to push the limits of chip technology.

Today’s leading mobile processors are built on 14nm and 20nm manufacturing techniques. This 7nm breakthrough will eventually lead to smaller, faster and more energy efficient processors. However, before we go any further, it’s important to note that we are still years away from any potentially viable mass manufacturing techniques at 7nm.

Just yesterday we were talking about the ongoing race to 10nm, but to reach even smaller chip sizes we're going to need new manufacturing techniques and new materials; plain old silicon just won't cut it here. This is where IBM's research comes in.

For a little background, one of the difficulties associated with smaller transistors (the electronic switches that form the basis of processors) is that the number of electrons able to squeeze through the transistor (i.e. the current) is also reduced, which increases the chance of errors. To combat this issue, IBM mixed some germanium into the channel, producing a silicon-germanium (SiGe) alloy with higher electron mobility, thereby improving the current flow. SiGe also helps to keep power consumption low and transistor switching at high speeds.


SUNY College of Nanoscale Science and Engineering's Michael Liehr, left, and IBM's Bala Haranand look at a wafer of 7nm chips

The other half of successfully producing such small chips is actually developing manufacturing tools detailed and accurate enough to etch out your processor design on such a small scale. IBM made its chip using EUV lithography, which uses a wavelength of just 13.5nm to etch out chip features. This is substantially smaller than the 193nm wavelength of state of the art argon fluoride lasers used at 14nm.

However, EUV is still expensive and difficult to use, making its suitability for time constrained mass production questionable. The tiniest errors at this size can completely undermine production, so expensive stabilizing isolation equipment and buildings are required to protect the manufacturing tools from vibrations. Some observers are concerned about the diminishing savings associated with ever smaller processors, as difficult and more expensive manufacturing techniques eat into the cost benefits of being able to squeeze more chips onto the same silicon area.

It is still too early to say when 7nm mass production capabilities will be ready, but firms have their sights set on sometime around 2017/2018.

Researchers develop the first skin-like flexible display

Posted by wicked July - 1 - 2015 - Wednesday Comments Off

Dr Debashis Chanda, University of Central Florida

Flexible display technology has been gradually turning up in high-end gadgets, but research into even more flexible solutions is showing no signs of slowing down. A research team from the University of Central Florida, led by Professor Debashis Chanda, has developed the first-ever skin-like color display, which is thin and flexible enough to be used alongside fabrics.

The research team's technique could open the door to thin, flexible, full-color displays that could be built into plastics and synthetic fabrics. The technology is only a few micrometres (µm) thick, considerably thinner than a human hair, which is typically around 0.1mm across.

“This is a cheap way of making displays on a flexible substrate with full-color generation … That’s a unique combination.” – Dr. Chanda

The research team says that you could do all sorts of fun things with it, such as create a shirt that can change its colors and patterns on demand. If you've played any of the recent Metal Gear Solid games, it's not far off from that crazy camouflage suit.

Inspiration apparently came from nature. Specifically, animals such as chameleons, octopuses and squids, which can change their colors using just their skin.

How does it work?

Traditional display components found in TVs or smartphones require a dedicated light source, be it an OLED or a back-light. However, this color-changing flexible display does not require a light source of its own; instead, it reflects ambient light from its surface.


Dr. Chanda used a National Geographic photograph to demonstrate the technology.

This is accomplished using a thin liquid crystal layer placed over a metallic, egg-carton-shaped nanostructure. The shape absorbs some light wavelengths and reflects others, and the reflected light can be adjusted by passing different voltages through the liquid crystal layer, not unlike a regular LCD.

Previous attempts at this type of display have resulted in limited colors and thicker designs. It’s the nanostructured metallic surface, as well as the interaction between liquid crystal molecules and plasmon waves, that enables the new design to offer such a wide range of colors with just a tiny footprint.

Potential products

As well as color changing clothes ranges, this type of technology could also improve on existing electronic products. TVs, laptops and smartphones could all be built with even thinner displays or new form factors. It also sounds like there’s potential for energy and cost savings too, which could help bring flexible display tech to the masses.

Not only that, but other types of wearable electronics could be born, such as fabric-based wrist bands, and there's potential for types of displays that we probably haven't seen yet. Of course, we're going to need complementary developments in flexible and discreet processor, circuitry and battery technologies before any of these futuristic-sounding products can end up on the market. But we're getting there, one step at a time.

Samsung breakthrough could almost double lithium battery capacity

Posted by wicked, Friday, June 26, 2015


Researchers at Samsung Electronics announced yesterday that they have developed a new technology to produce a silicon anode material that coats graphene onto the silicon surface for higher energy density. In other words, Samsung has found a way to almost double the capacity of lithium batteries, which are used to power smartphones and various other gadgets.

As I'm sure you're aware, smartphone battery capacity has increased only slightly over the last decade, as the technology remains limited by the physical size of gadgets and by the materials inside the battery itself. Since we can't make batteries any bigger, increases in energy density are needed, and researchers have been looking to new materials for the solution.

This is where Samsung's research comes in. The company has come up with a new coating method for battery anodes, which overcomes the cycling performance and capacity limitations of current implementations. The new process makes use of that excellent conductive material known as graphene, which is grown directly onto the silicon surface without silicon carbide formation. If this sounds familiar, other groups in the US have been attempting similar ideas.

Samsung graphene growth on Si

Samsung's researchers claim that the technique allows a full cell to reach volumetric energy densities of 972 and 700 Wh/l at the first and 200th cycles respectively, when paired with a commercial lithium cobalt oxide cathode. That is around 1.8 and 1.5 times greater than commercialized lithium-ion batteries, meaning more battery capacity for a given volume. Typically, these types of designs see their life spans shrink as charge and discharge cycles accumulate, but this time the researchers also claim good cycling performance, thanks to the multi-layer design.
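Some quick arithmetic on those figures puts them in perspective. The commercial reference densities are not stated in the announcement, so the baselines below are inferred from the 1.8x and 1.5x claims.

```python
# Arithmetic check on the quoted energy densities and ratios.
# 972 / 700 Wh/l are from Samsung's claim; the "baseline" values are
# back-calculated from the 1.8x / 1.5x figures, not stated directly.

first_cycle = 972.0   # Wh/l, full cell, first cycle
cycle_200 = 700.0     # Wh/l, full cell, 200th cycle

retention = cycle_200 / first_cycle
print(f"energy density retained after 200 cycles: {retention:.0%}")  # 72%

# Implied commercial reference points for each comparison:
print(f"first-cycle baseline:  {first_cycle / 1.8:.0f} Wh/l")        # 540
print(f"200th-cycle baseline:  {cycle_200 / 1.5:.0f} Wh/l")          # 467
```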

Samsung expects that its breakthrough will have important implications for both mobile devices and the electric car industry, which both really need additional battery capacity. However, like most new ideas, industry observers expect that the technology is at least two or three years away from commercialization.

Researchers charge a fitness tracker using Wi-Fi

Posted by wicked, Monday, June 8, 2015


Wireless charging has been around for a while now, but current implementations are rather limited in range. Longer-distance charging is being talked up as a future technology, and researchers from the University of Washington have managed to harvest power from regular Wi-Fi signals. The technology, known as PoWi-Fi, can be used to charge low-power gadgets and IoT devices.

This certainly isn’t the first time that we have heard about technology that aims to transmit power over Wi-Fi networks or that intends to harvest background waveforms for power. But this latest research seems promising, as it could be easy to deploy alongside existing home networks.

How did they do it?

Essentially, PoWi-Fi turns a high-frequency Wi-Fi waveform into usable DC power. However, charging is only possible in short bursts, while the Wi-Fi transmitter is actually sending data.

To side-step this issue, the researchers modified a Wi-Fi hotspot so that it would transmit random noise instead of turning the signal off while idle. Fortunately, adding in the noise signal did little to slow data transfer rates across the Wi-Fi network.

To explain a little further, the PoWi-Fi harvester circuit uses the typical full-wave rectifier and "reservoir" capacitor setup found in AC-powered electronics. The circuit takes the AC Wi-Fi signal and converts it into a DC supply that can power low-power electronics and gadgets. However, if the Wi-Fi transmitter doesn't send a signal, the capacitor begins to discharge, which would turn off any connected components. Adding random noise simply keeps the capacitor charged, even when no usable data is being sent over Wi-Fi.
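The idle-gap problem can be sketched with a simple RC decay model. The component values below are made up for illustration, not taken from the PoWi-Fi paper, but they show why long silences drain the reservoir capacitor and continuous noise keeps it topped up.

```python
import math

# Toy model of a reservoir capacitor discharging through its load while
# the Wi-Fi transmitter is idle. Component values are illustrative only.

R_LOAD = 100e3      # ohms, load presented by the low-power circuit
C = 100e-6          # farads, reservoir capacitor
V_CHARGED = 0.3     # volts, level reached while Wi-Fi is transmitting

def v_after_idle(t_idle):
    """Capacitor voltage after t_idle seconds with no RF input (RC decay)."""
    return V_CHARGED * math.exp(-t_idle / (R_LOAD * C))

for t in (0.1, 1.0, 10.0, 60.0):
    print(f"after {t:5.1f}s idle: {v_after_idle(t) * 1000:.0f} mV")
```

With these values the RC time constant is 10 seconds, so a minute of silence collapses the supply to almost nothing; a hotspot that fills idle periods with noise never lets the decay begin.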

PoWifi harvester schematic

Considerable work has also gone into designing a “matching network” which ensures compatibility across various Wi-Fi channels. Another interesting technical point is that the Wi-Fi signal itself does not generate a high enough voltage for most batteries or low-power micro-controllers. Instead, the design uses a DC to DC converter to step the voltage up from a few hundred millivolts to 2.4V. When using a rechargeable battery, the researchers could also optimize the design further by using the battery to boost the voltage.

To ensure that the technology would work in a range of different environments, the researchers ran a number of trials in various home environments and found that the technology could continue to charge devices even in areas with busy Wi-Fi traffic.

PoWifi home networks

Enough of the technical stuff; you can read all the fine details in the published document. What this means is that PoWi-Fi can deliver small amounts of power wirelessly to a selection of different devices and sensors. While unlikely to charge your smartphone in any realistic length of time, this technology certainly has potential for powering low-power Internet of Things sensors and charging basic gadgets, such as wearables or cameras.

The research team demonstrated its technology powering a low-power surveillance camera and temperature sensor from up to 6 meters (20 feet) away and a camera with rechargeable battery from up to 7 meters (23 feet). PoWi-Fi was also used to charge up a Jawbone UP24 fitness tracker to 41 percent in two-and-a-half hours, which isn’t bad but isn’t practical for anything other than overnight charging.

PoWi-Fi is still in the early stages of development, so there’s plenty more testing and improvements to be done. But this is a promising start that could help make future home IoT devices cheaper and easier to power.

Graphene and printed electronics could usher in truly discreet wearables

Posted by wicked, Thursday, May 21, 2015

Flexible OLED Wearables

The wearables market is in full swing, with a wide range of fitness trackers and smartwatches now on the market. However, the form factor is still perhaps not ideal, as bulky electronics have to be squeezed in behind a watch face. In the future, this type of issue could be solved by recent developments in the world of flexible electronics.

Material developments for printed electronics are being hailed as the revolution needed to make cheap, printed, flexible electronic circuits a reality. At the forefront of this research is the magical material known as graphene, which boasts exceptional electrical, mechanical and optical properties at a thickness of just one atom.

Graphene wearable electronics

You can see a small strand of the material in the bottom left of the picture, which is transferring current to the red LED.

In one of the latest developments, researchers from the University of Exeter have managed to embed transparent, flexible graphene electrodes into fibers widely used within the textile industry.

The technique allows for the transfer of graphene from copper foils to a polypropylene fibre, a material widely used within the textile industry. This means that electrical signals can be transferred throughout a piece of fabric without being seen by the wearer and without impacting the flexibility of the material.

This breakthrough could go a long way towards shrinking down the size of wearable electronics. Some parts of the circuitry could be embedded into fabric items, such as a watch strap, gloves, or other pieces of clothing.

“The possibilities for its use are endless, including textile GPS systems, to biomedical monitoring, personal security or even communication tools for those who are sensory impaired.”
Professor Monica Craciun, University of Exeter

Similarly, researchers from the University of Manchester, together with BGT Materials Limited, have managed to use graphene ink to print a radio frequency antenna, suitable for practical use in RFID tags and wireless sensors. As it’s printed, it’s entirely flexible and cheap to mass produce. Printed nano-inks and other conductive inks also have positive implications for flexible display technologies, as they can be printed at a low cost and increase the flexibility of the screen backplane over existing TFT materials.

Flexible Battery Types

Left: Samsung Gear Fit battery Center: rechargeable zn-based battery Right: ultra-thin and flexible LiPON

Of course, we're still going to need small form factor, flexible batteries to power this type of technology. Fortunately, there is a selection of developments that may find use in these types of wearable applications, including rechargeable printed zinc metal oxide and lithium phosphorus oxynitride (LiPON) designs.

However, these technologies are still being adapted to the size, weight, and power constraints of demanding consumer computing applications. Furthermore, the next real challenge for flexible battery technology will be to reduce the costs associated with the most promising implementations.

Flexible Battery Benchmark

These technologies have far wider-reaching implications than just consumer electronics, though. The medical and defense industries are both expected to benefit greatly from wearable innovations, as embedded, flexible and wearable electronics will allow for small form factor wearable computers.

We’re still a way off from the first consumer products, but we’re edging closer to a future full of discreet wearable electronics.

Samsung and Samsonite working on smart luggage that can check itself in

Posted by wicked, Monday, May 4, 2015


“Smart luggage” is not an entirely new concept, but Samsung and Samsonite (the similarity in names is purely coincidental) are looking to make the formula much more popular with a new range of affordable and more accessible smart bags.

The two aim to provide luggage with embedded microprocessors that can be tracked through GPS, which you may appreciate if you've ever lost a bag at an airport, and that can offer anti-tampering alerts to warn you if someone has attempted to open your bag. Presumably, Samsung is getting involved to provide the mobile link to your luggage and will be providing much of the behind-the-scenes software, while Samsonite will be looking to integrate the technology into some of its existing bags or a new range.

‘Smart luggage will be able to communicate with you but it needs to be able to do much more than just give its location’ … ‘We are working with Samsung to create something that is more than a gimmick.’ – Samsonite chief executive Ramesh Tainwala

Furthermore, Samsonite is working with airlines on a self-check-in feature, which will automatically provide the airport with baggage information to ensure the correct destination and airline, and to check the bag’s weight against any limits. Talks with Emirates, Lufthansa and KLM Air France are said to be in the works.

If you’re looking for something even more futuristic, the firm is also working on a project to develop ‘self-propelling’ luggage. But it’s not really that practical yet, as the engine takes up a third of the bag’s space and weighs 20 kilos!