Hardware

Through years of dev kits, prototypes, and trade show demos of the Oculus Rift, we've been stuck guessing at just how much hardware power the eventual consumer version of the device would require. Now, with that consumer launch officially slated for early 2016, Oculus has announced what PC hardware it recommends for a quality VR experience.

According to Oculus, those recommended hardware specs are:

  • NVIDIA GTX 970 / AMD 290 equivalent or greater
  • Intel i5-4590 equivalent or greater
  • 8GB+ RAM
  • Compatible HDMI 1.3 video output
  • 2x USB 3.0 ports
  • Windows 7 SP1 or newer

That's a relatively beefy system, all things considered. A quick price check on Newegg suggests that the listed CPU, RAM, and video card would add up to just over $600. Add in a barebones tower, motherboard, and 250GB solid-state drive, and you're looking at a nearly $900 system to run the Rift, all told. That's before you account for the (still unannounced) price of the headset itself. Upgrading from an existing gaming rig will obviously be cheaper, and component costs will come down by the Rift's early 2016 launch, but a lot of potential VR users are still going to be staring down some significant upgrade costs.
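
Here's one way that math could break down, as a quick back-of-the-envelope sketch in Python; the individual part prices are rough mid-2015 estimates of ours, not Newegg listings.

# Hypothetical part prices (our estimates, not Newegg's)
core_parts = {
    "GeForce GTX 970": 330,
    "Core i5-4590": 200,
    "8GB DDR3 RAM": 75,
}
rest_of_build = {
    "case + power supply": 100,
    "motherboard": 100,
    "250GB SSD": 95,
}

core_total = sum(core_parts.values())
full_total = core_total + sum(rest_of_build.values())
print(f"GPU + CPU + RAM: ~${core_total}")   # "just over $600"
print(f"Full new build:  ~${full_total}")   # "nearly $900"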

The Windows 7 requirement is also surprising, given that current Rift development kits run on Mac OS X and Linux as well. In a detailed explanation of the recommended specs, Oculus Chief Architect Atman Binstock says that "our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux, but we don’t have a timeline."

Elsewhere in the blog post, Binstock reveals that the consumer Rift itself "runs at 2160×1200 at 90Hz split over dual displays." That translates to 233 million raw pixels per second, but the "eye target scale" for Rift scenes is an even higher 400 million shaded pixels per second, Binstock says. That's "approximately 3x the GPU power of 1080p rendering" at 60 frames per second on a normal monitor, according to Binstock, a requirement that's more serious when you take into account VR's need for a rock-solid refresh rate with no dropped frames.
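
The pixel math checks out. Here is the arithmetic spelled out (pixel rates only; this ignores distortion passes, overdraw, and everything else that makes real VR rendering more expensive).

# Raw pixel throughput of the consumer Rift vs. a 1080p60 monitor
rift_raw_pixels = 2160 * 1200 * 90      # ~233 million pixels per second
rift_eye_target = 400_000_000           # Oculus' stated shaded-pixel target
monitor_1080p60 = 1920 * 1080 * 60      # ~124 million pixels per second

print(f"Rift raw rate:        {rift_raw_pixels / 1e6:.0f} Mpix/s")
print(f"1080p60 monitor:      {monitor_1080p60 / 1e6:.0f} Mpix/s")
print(f"Eye target / 1080p60: {rift_eye_target / monitor_1080p60:.1f}x")  # ~3.2x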

Oculus' recommended hardware configuration will be able to provide a solid sense of virtual reality presence "over the lifetime of the Rift," Binstock says, and allows game developers to "optimize for a known hardware configuration, which ensures a better player experience of comfortable sustained presence." Binstock also points out that equivalent-performance hardware will come down in price over the life of the Rift, expanding the universe of PCs that can handle its virtual reality. And while "almost no current laptops have the GPU performance for the recommended spec... upcoming mobile GPUs may be able to support this level of performance."

Going forward, Binstock suggests that the launch of consumer VR "will likely drive changes in GPUs, OSs, drivers, 3D engines, and apps, ultimately enabling much more efficient low-latency VR performance." Translation: Maybe we can expect even cheaper, VR-focused PC hardware once the manufacturers catch up to our needs.

Source: ArsTechnica

There has been a LOT of confusion around Windows, SSDs (hard drives), and whether or not they are getting automatically defragmented by automatic maintenance tasks in Windows.

There's a general rule of thumb that "defragging an SSD is always a bad idea." I think we can agree we've all heard it. We've all been told that SSDs don't last forever, and that when they die, they just poof and die. SSDs can only handle a finite number of writes before things start going bad. That's true of regular spinning-rust hard drives too, but the conventional wisdom around SSDs is to avoid any writes that are perceived as unnecessary.

Does Windows really defrag your SSD?

I've seen statements around the web like this:

I just noticed that the defragsvc is hammering the internal disk on my machine.  To my understanding defrag provides no value add on an SSD and so is disabled by default when the installer determines the disk is SSD.  I was thinking it could be TRIM working, but I thought that was internal to the SSD and so the OS wouldn’t even see the IO.

One of the most popular blog posts on the topic of defrag and SSDs under Windows is by Vadim Sterkin. Vadim's analysis has a lot going on: he can see that defrag is doing something, but it's not clear why, how, or for how long. Something is clearly running, but what is it doing, and why?

I made some inquiries internally, got what I thought was a definitive answer and waded in with a comment. However, my comment, while declarative, was wrong.

Windows doesn’t defrag SSDs. Full stop. If it reports as an SSD it doesn’t get defragged, no matter what. This is just a no-op message. There’s no bug here, sorry. - Me in the Past

I dug deeper and talked to developers on the Windows storage team, and this post is written in conjunction with them to answer the question, once and for all:

"What's the deal with SSDs, Windows and Defrag, and more importantly, is Windows doing the RIGHT THING?"

It turns out that the answer is more nuanced than just yes or no, as is common with technical questions.

The short answer is, yes, Windows does sometimes defragment SSDs, yes, it's important to intelligently and appropriately defrag SSDs, and yes, Windows is smart about how it treats your SSD.

The long answer is this.

Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

Wow, that's awesome and dense. Let's tease it apart a little.

When he says volume snapshots or "volsnap," he means the Volume Shadow Copy system in Windows. This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can roll back to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here: defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore, as one needs the other. You could turn off System Restore if you want, but that turns off a pretty important safety net for Windows.

One developer added this comment, which I think is right on.

I think the major misconception is that most people have a very outdated model of disk\file layout, and how SSDs work.

First, yes, your SSD will get intelligently defragmented once a month. Fragmentation, while less of a performance problem on SSDs than on traditional hard drives, is still a problem. SSDs *do* get fragmented.

It's also worth pointing out that what we (old-timers) think of as the "defrag.exe" UI is really "optimize your storage" now. It was defrag in the past, and now it's a larger, automated disk-health system.

Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered primarily a performance issue with traditional hard drives. When a disk gets fragmented, a single file can exist in pieces in different locations on a physical drive. The drive then has to seek around collecting those pieces, and that takes extra time.

This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.

SSDs also have the concept of TRIM. While TRIM (and retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and its schedule is managed by the same UI from the user's perspective. TRIM is a way for SSDs to mark data blocks as not in use. Writing to empty blocks on an SSD is faster than writing to blocks in use, as those need to be erased before they can be written again. SSDs internally work very differently from traditional hard drives and don't usually know which sectors are in use and which are free space. Deleting something just means marking it as not in use. TRIM lets the operating system notify the SSD that a page is no longer in use, and this hint gives the SSD more information, which results in fewer writes and, theoretically, a longer operating life.

In the old days, you would sometimes be told by power users to run this at the command line to see if TRIM was enabled for your SSD. A zero result indicates it is.

fsutil behavior query DisableDeleteNotify

However, this stuff is handled by Windows today in 2014, and you can trust that it's "doing the right thing." Windows 7, along with 8 and 8.1, comes with appropriate and intelligent defaults, and you don't need to change them for optimal disk performance. This is also true of Server SKUs like Windows Server 2008 R2 and later.
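
To make the storage team's point about the bounded TRIM queue concrete, here is a minimal sketch of the idea in Python. It is our own simplification, not Windows code: TRIM hints issued at delete time can be dropped when the queue is full, and the scheduled retrim re-covers all the free space later anyway.

from collections import deque

MAX_QUEUED_TRIMS = 8          # assumed small cap, just to make drops visible

trim_queue = deque()
dropped_hints = 0

def on_space_freed(lba_range):
    """File system frees space and queues a TRIM hint, unless the queue is full."""
    global dropped_hints
    if len(trim_queue) >= MAX_QUEUED_TRIMS:
        dropped_hints += 1            # hint lost; harmless, retrim will cover it
    else:
        trim_queue.append(lba_range)

def scheduled_retrim(free_ranges):
    """Storage Optimizer pass: re-issue TRIM for all currently free space,
    in chunks coarse enough that the queue doesn't overflow."""
    trim_queue.clear()
    for chunk in free_ranges:
        trim_queue.append(chunk)

# A burst of deletes overflows the small queue...
for i in range(20):
    on_space_freed((i * 1000, (i + 1) * 1000))
print(f"queued: {len(trim_queue)}, dropped: {dropped_hints}")

# ...but the periodic retrim re-covers the free space anyway.
scheduled_retrim([(0, 20_000)])
print(f"ranges re-trimmed: {len(trim_queue)}")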

Conclusion

No, Windows is not foolishly or blindly running a defrag on your SSD every night, and no, Windows defrag isn't shortening the life of your SSD unnecessarily. Modern SSDs don't work the same way that we are used to with traditional hard drives.

Yes, your SSD's file system sometimes needs a kind of defragmentation, and that's handled by Windows, monthly by default, when appropriate. The intent is to maximize performance and a long life. If you disable defragmentation completely, you are taking the risk that your file system metadata could reach maximum fragmentation and potentially get you into trouble.

Source: Hanselman

At Backblaze we now have 34,881 drives and store over 100 petabytes of data. We continually track how our disk drives are doing, which ones are reliable, and which ones need to be replaced.

I did a blog post back in January, called “What Hard Drive Should I Buy?” It covered the reliability of each of the drive models that we use. This month I’m updating those numbers and sharing some surprising new findings.

Reliability of Hard Drive Brands

Losing a disk drive at Backblaze is not a big deal. Every file we back up is replicated across multiple drives in the data center. When a drive fails, it is promptly replaced, and its data is restored. Even so, we still try to avoid failing drives, because replacing them costs money.

We carefully track which drives are doing well and which are not, to help us when selecting new drives to buy.

The good news is that the chart today looks a lot like the one from January, and that most of the drives are continuing to perform well. It’s nice when things are stable.

The surprising (and bad) news is that Seagate 3.0TB drives are failing a lot more, with their failure rate jumping from 9% to 15%. The Western Digital 3TB drives have also failed more, with their rate going up from 4% to 7%.

In the chart below, the grey bars are the failure rates up through the end of 2013, and the colored bars are the failure rates including all of the data up through the end of June, 2014.

Hard Drive Failure Rates by Model

You can see that the HGST (formerly Hitachi) drives, the Seagate 1.5 TB and 4.0 TB drives, and the Western Digital 1.0 TB drives are all continuing to perform as well as they were before. But the failure rates of the Seagate and Western Digital 3.0 TB drives are up quite a bit.

What is the likely cause of this?

It may be that those drives are less well-suited to the data center environment. Or it could be that getting them by drive farming and removing them from external USB enclosures caused problems. We’ll continue to monitor and report on how these drives perform in the future.

Should we switch to enterprise drives?

Assuming we continue to see a failure rate of 15% on these drives, would it make sense to switch to “enterprise” drives instead?

There are two answers to this question:

  1. Today on Amazon, a Seagate 3 TB "enterprise" drive costs $235, versus $102 for a Seagate 3 TB "desktop" drive. Most of the drives we get have a 3-year warranty, making failures a non-issue from a cost perspective for that period. Even if there were no warranty, though, with a 15% annual failure rate on the consumer "desktop" drive and a 0% failure rate on the "enterprise" drive, the breakeven would be about 10 years, which is longer than we expect to even run the drives for (see the back-of-the-envelope sketch after this list).
  2. The assumption that “enterprise” drives would work better than “consumer” drives has not been true in our tests. I analyzed both of these types of drives in our system and found that their failure rates in our environment were very similar — with the “consumer” drives actually being slightly more reliable.
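
Here's the rough shape of that breakeven math in Python, using only the prices and failure rates quoted above; exactly how it rounds to "10 years" depends on assumptions the post doesn't spell out, such as replacement labor or future price drops.

desktop_price    = 102     # Seagate 3TB "desktop", USD
enterprise_price = 235     # Seagate 3TB "enterprise", USD
desktop_afr      = 0.15    # 15% annual failure rate assumed above
enterprise_afr   = 0.00    # generously assumed to never fail

price_premium        = enterprise_price - desktop_price              # $133 more up front
expected_yearly_loss = (desktop_afr - enterprise_afr) * desktop_price  # ~$15/year in dead drives

print(f"Breakeven: {price_premium / expected_yearly_loss:.1f} years")  # roughly 9 years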

Detailed Reliability of Hard Drive Models

This table shows the detailed breakdown of how many of which drives we have, how old they are on average, and what the failure rate is. It includes all drive models that we have at least 200 of. A couple of models are new to Backblaze and show a failure rate of “n/a” because there isn’t enough data yet for reliable numbers.

Number of Hard Drives by Model at Backblaze
Model                                              Size    Number of Drives   Average Age (years)   Annual Failure Rate
Seagate Desktop HDD.15 (ST4000DM000)               4.0TB   9619               0.6                   3.0%
HGST Deskstar 7K2000 (HGST HDS722020ALA330)        2.0TB   4706               3.4                   1.1%
HGST Deskstar 5K3000 (HGST HDS5C3030ALA630)        3.0TB   4593               2.1                   0.7%
Seagate Barracuda 7200.14 (ST3000DM001)            3.0TB   3846               1.9                   15.7%
HGST Megascale 4000.B (HGST HMS5C4040BLE640)       4.0TB   2884               0.2                   n/a
HGST Deskstar 5K4000 (HGST HDS5C4040ALE630)        4.0TB   2627               1.2                   1.2%
Seagate Barracuda LP (ST31500541AS)                1.5TB   1699               4.3                   9.6%
HGST Megascale 4000 (HGST HMS5C4040ALE640)         4.0TB   1305               0.1                   n/a
HGST Deskstar 7K3000 (HGST HDS723030ALA640)        3.0TB   1022               2.6                   1.4%
Western Digital Red (WDC WD30EFRX)                 3.0TB   776                0.5                   8.8%
Western Digital Caviar Green (WDC WD10EADS)        1.0TB   476                4.6                   3.8%
Seagate Barracuda 7200.11 (ST31500341AS)           1.5TB   365                4.3                   24.9%
Seagate Barracuda XT (ST33000651AS)                3.0TB   318                2.2                   6.7%

We use two different models of Seagate 3TB drives. The Barracuda 7200.14 is having problems, but the Barracuda XT is doing well with less than half the failure rate.

There is a similar pattern with the Seagate 1.5TB drives. The Barracuda 7200.11 is having problems, but the Barracuda LP is doing well.

Summary

While the failure rate of Seagate and Western Digital 3 TB hard drives has started to rise, most of the consumer-grade drives in the Backblaze data center are continuing to perform well, and are a cost-effective way to provide unlimited online backup at a good price.

Notes

9-30-2014 – We were nicely asked by the folks at HGST to replace the name Hitachi with the name HGST, given that HGST is no longer a Hitachi company. To that end, we have changed Hitachi to HGST in this post and in the graph.

Source: Backblaze

When it comes to technology, it is almost impossible to stay on the forefront. You will drive yourself nuts, and empty your wallet, chasing after every new thing. Got the newest and most expensive graphics card? Yesterday's news within months. The newest iPhone? You can make that claim for one year at best.

Hard drives are no different and are probably the longest-running way for manufacturers to take money from nerds. I bought a 4TB drive earlier in the year thinking it would be high-end for some time, but sure enough, it is now yawn-worthy. Why? Today, Seagate begins shipping 8TB hard drives. Yup, twice as big as my 4TB drive. I haven't learned my lesson though as I already want one!

"A cornerstone for growing capacities in multiple applications, the 8TB hard drive delivers bulk data storage solutions for online content storage providing customers with the highest capacity density needed to address an ever increasing amount of unstructured data in an industry-standard 3.5-inch HDD. Providing up to 8TB in a single drive slot, the drive delivers maximum rack density, within an existing footprint, for the most efficient data center floor space usage possible", says Seagate.

The manufacturer further explains, "the 8TB hard disk drive increases system capacity using fewer components for increased system and staffing efficiencies while lowering power costs. With its low operating power consumption, the drive reliably conserves energy thereby reducing overall operating costs. Helping customers economically store data, it boasts the best Watts/GB for enterprise bulk data storage in the industry".

In other words, you can free up SATA connectors and lower energy costs by using one drive instead of multiple. Think of it this way: I already own a 4TB drive. If I add a second 4TB drive instead of replacing the first with an 8TB variant, I will be wasting a SATA port and using more electricity. For a home user, this isn't a huge deal, but in a server environment, it can really add up. Over time, the savings could justify the cost.
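
To put a ballpark number on that, here is a rough sketch; the per-drive wattage and electricity price are assumptions of ours, not figures from Seagate or the article.

idle_watts_per_drive = 7.0        # assumed draw of a typical 3.5" drive
usd_per_kwh          = 0.12       # assumed electricity price
hours_per_year       = 24 * 365

def annual_power_cost(extra_drives):
    kwh = extra_drives * idle_watts_per_drive * hours_per_year / 1000
    return kwh * usd_per_kwh

print(f"1 extra drive at home:      ~${annual_power_cost(1):.0f}/year")
print(f"100 extra drives in a rack: ~${annual_power_cost(100):.0f}/year")

Small change for a home user, but it compounds quickly at data-center scale, which is exactly the market Seagate is pitching.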

Cost is the big mystery though, as Seagate has not announced an MSRP. However, it is shipping a limited supply of the drives to select retailers and will open it up to more later in the year. Expect them to be expensive, at least for the time being. Hopefully they will work in existing USB enclosures, so laptop and Surface users can enjoy the fun too.

Do you need 8TB of storage on your home computer? What are you storing? Tell me in the comments.

A Japanese research team has developed a technology to drastically improve the write speed, power efficiency, and cycling capability (product life) of storage devices based on NAND flash memory (SSDs).

The team is led by Ken Takeuchi, professor at the Department of Electrical, Electronic and Communication Engineering, Faculty of Science and Engineering of Chuo University. The development was announced at the 2014 IEEE International Memory Workshop (IMW), an international academic conference on semiconductor memory technologies, which took place from May 18 to 21, 2014, in Taipei. The title of the paper is "NAND Flash Aware Data Management System for High-Speed SSDs by Garbage Collection Overhead Suppression."

With NAND flash memory, it is not possible to overwrite data in place, making it necessary to write new data to a different area and then invalidate the old one. As a result, data becomes fragmented, the amount of invalid area grows, and usable capacity shrinks. NAND flash memories therefore carry out "garbage collection," which rearranges fragmented data contiguously and erases blocks of invalid area. This process takes 100ms or longer, drastically decreasing the write speed of the SSD.

In September 2013, to address this issue, the research team developed a method to prevent data fragmentation by improving the middleware that controls storage for database applications. It makes (1) the "SE (storage engine)" middleware, which assigns logical addresses when application software accesses a storage device, and (2) the FTL (flash translation layer) middleware, which converts logical addresses into physical addresses on the SSD controller side, work in conjunction. This time, the team has developed a more versatile method that can be used for a wider variety of applications.

The new method forms a middleware layer called "LBA (logical block address) scrambler" between the file system (OS) and FTL. The LBA scrambler works in conjunction with the FTL and converts the logical addresses of data being written to reduce the effect of fragmentation.

Specifically, instead of writing data on a new blank page, data is written on a fragmented page located in the block to be erased next. As a result, the ratio of invalid pages in the block to be erased increases, reducing the number of valid pages that need to be copied to another area at the time of garbage collection.
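
As a toy illustration of why that helps (our own simplification, not the Chuo University implementation): garbage collection has to copy every still-valid page out of a block before erasing it, so anything that leaves fewer valid pages in the victim block cuts the overhead directly.

PAGES_PER_BLOCK = 128
PAGE_COPY_US    = 900      # assumed cost to read and reprogram one page, microseconds
BLOCK_ERASE_US  = 3500     # assumed block erase time, microseconds

def gc_cost_ms(valid_pages_in_victim_block):
    """Time to reclaim one block: copy the surviving pages elsewhere, then erase."""
    return (valid_pages_in_victim_block * PAGE_COPY_US + BLOCK_ERASE_US) / 1000

# Naive placement: the next block to be erased still holds 96 valid pages.
print(f"naive:     {gc_cost_ms(96):.1f} ms per block")

# Scrambler-style placement: incoming writes were steered to overwrite (and so
# invalidate) data living in that block, leaving only 32 valid pages to move.
print(f"scrambled: {gc_cost_ms(32):.1f} ms per block")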

In a simulation, the research team confirmed that the new technology improves the write speed of the SSD by up to 300% while reducing power consumption by up to 60% and the number of write/erase cycles by up to 55%, increasing product life. Because the new method requires no changes to the NAND flash memory itself and is implemented entirely in middleware, it can be applied to existing SSDs as-is.

Source: Nikkei Technology

In November, online backup provider Backblaze published some interesting statistics on hard drive mortality based on over 25,000 units in active service. It found that failure rates were higher in the first 18 months and after three years. Those conclusions matched the findings of other studies on the subject, but frustratingly, they didn't include information on specific makes and models.

Today, Backblaze is naming names.

The firm has posted details on failure rates for 15 different consumer-grade hard drives, and the numbers don't look good for Seagate. See for yourself:

And that doesn't even tell the whole story. In Backblaze's storage pods, Seagate's Barracuda 1.5TB has an annual failure rate of over 25%. The 5,400-RPM version of that drive fares better—its failure rate is only 10%—but that's still pretty high compared to the competition. The failure rate of similar Hitachi drives in the same environment is less than 2%.

Only 10% of the hard drives in Backblaze's storage pods come from WD, and they're strictly low-power Green and Red models. The annual failure rates are pretty low, though: only 3-4%. Backblaze's purchasing decisions are largely driven by price, which is probably why fewer WD drives are in the mix. They tend to be a little pricier.

Interestingly, two drives proved to be so unreliable in Backblaze's storage pods that they were left out of the totals completely. Seagate's Barracuda LP 2TB and WD's Green 3TB "start accumulating errors as soon as they are put into production," the company says. It thinks vibration might be part of the problem. Other Barracuda LP and Green models seem unfazed, though.

Here's a look at survival rates over time:

After three years, only about three quarters of the Seagate drives remain. A surprising number of those failures come between 18 and 24 months, which contradicts the overall trend noted in Backblaze's initial study. Infant mortality seems to be a bigger problem for the WD drives, while the Hitachis fail at a steady but slow rate.

Backblaze says the Seagate drives are also more prone to dropping out of RAID arrays prematurely. The company uses consumer-grade drives that aren't designed explicitly for RAID environments, of course, but that doesn't seem to bother the Hitachis. They spend just 0.01% of their time in so-called "trouble" states, compared to 0.17% for the WD drives and 0.28% for the Seagates.

Overall, Backblaze's data suggests that Seagate drives are less reliable than their peers. That matches my own experiences with a much smaller sample size, and it may influence our future recommendations in the System Guide. Hmm. In the meantime, kudos to Backblaze for not only collecting this data, but also publishing a detailed breakdown.

Source: Tech Report

PC manufacturer Alienware has clarified an earlier report that its upcoming Steam Machine cannot be upgraded by users.

Owners will not be locked out of modding, the company has now explained, but making alterations to the hardware will not be "easy".

"Enabling customers the opportunity to upgrade components has been a core tenet for Alienware since the company was founded, and that remains true today," Alienware boss Frank Azor explained to Eurogamer in a statement today.

"The Alienware Steam Machine, announced at CES, is designed to deliver a great gaming experience in the living room and we will enable customers to upgrade components. Considering we've purposefully designed the Alienware Steam Machine to be smaller than the latest generation consoles, upgrading the internal components will not be as easy as compared to other platforms, such as the Alienware X51, but we will not prevent a customer from upgrading."

Azor's comments take a different tack from his position earlier this week, when he claimed that, for Alienware's Steam Machine, "there will be no customisation options, you can't really update it."

Nevertheless, Azor reiterated his earlier comment that users who are interested in heavily modding their hardware would be better off buying one of the company's more conventional models.

"If a customer is interested in modding and upgrading their rig on a regular basis, then we recommend the Alienware X51," he said. "Enabling easy upgradeability was a critical design requirement for the X51. It includes features such as single screw access to all internal components, and easy-to-remove ODD, HDD, graphics, etc.

"We feel we have multiple options for customers based on their individual needs. If a gamer wants more freedom to upgrade, we have the X51. If they would prefer a smaller, more console-like system, we will offer the Alienware Steam Machine."

A Steam Machine is expected to cost around the same price as a new generation console. The range will be updated with fresh hardware every year.

Source: Eurogamer

It seems that DDR4 isn't as far away as we thought. According to Crucial Memory's promo page, it's going to come out in late 2013, and there is just one month left until the year's end. So, hopefully, we will have DDR4 in our PCs by next month.

Of course, DDR4 has a different architecture, meaning we are going to need a different motherboard; we can't just drop the new modules into our old DDR3 systems. But is it worth upgrading to DDR4? Crucial Memory also provided a comparison chart with some specifications of what DDR4 will offer, just to show the difference.

DDR4 will run at just 1.2 volts, as stated by Crucial, while offering twice the speed of DDR3 memory. It will run at a base memory speed of 2133MHz and have a minimum density of 4GB. According to the chart, DDR4 is 100% faster than DDR3, requires 20% less voltage, and has 300% more density.
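
Those percentages line up if you assume the usual DDR3 baselines of 1.5 V, a 1066MHz base speed, and 1GB minimum density; the baselines are our assumption, since the promo page itself isn't quoted here.

ddr3 = {"volts": 1.5, "base_mhz": 1066, "min_density_gb": 1}   # assumed DDR3 baseline
ddr4 = {"volts": 1.2, "base_mhz": 2133, "min_density_gb": 4}   # from Crucial's chart

faster       = (ddr4["base_mhz"] / ddr3["base_mhz"] - 1) * 100              # ~100% faster
less_voltage = (1 - ddr4["volts"] / ddr3["volts"]) * 100                    # 20% less voltage
more_density = (ddr4["min_density_gb"] / ddr3["min_density_gb"] - 1) * 100  # 300% more density

print(f"{faster:.0f}% faster, {less_voltage:.0f}% less voltage, {more_density:.0f}% more density")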