Hardware

Windows intelligently defrags your SSD

There has been a LOT of confusion around Windows, SSDs (solid-state drives), and whether or not they are being automatically defragmented by Windows' scheduled maintenance tasks.

There's a general rule of thumb or statement that "defragging an SSD is always a bad idea." I think we can agree we've all heard this before. We've all been told that SSDs don't last forever and when they die, they just poof and die. SSDs can only handle a finite number of writes before things start going bad. This is of course true of regular spinning rust hard drives, but the conventional wisdom around SSDs is to avoid writes that are perceived as unnecessary.

Does Windows really defrag your SSD?

I've seen statements around the web like this:

I just noticed that the defragsvc is hammering the internal disk on my machine.  To my understanding defrag provides no value add on an SSD and so is disabled by default when the installer determines the disk is SSD.  I was thinking it could be TRIM working, but I thought that was internal to the SSD and so the OS wouldn’t even see the IO.

One of the most popular blog posts on the topic of defrag and SSDs under Windows is by Vadim Sterkin. Vadim's analysis has a lot going on. He can see that defrag is doing something, but it's not clear why, how, or for how long. What's the real story? Something is clearly running, but what is it doing and why?

I made some inquiries internally, got what I thought was a definitive answer and waded in with a comment. However, my comment, while declarative, was wrong.

Windows doesn’t defrag SSDs. Full stop. If it reports as an SSD it doesn’t get defragged, no matter what. This is just a no-op message. There’s no bug here, sorry. - Me in the Past

I dug deeper and talked to developers on the Windows storage team, and this post is written in conjunction with them to answer the question, once and for all:

"What's the deal with SSDs, Windows and Defrag, and more importantly, is Windows doing the RIGHT THING?"

It turns out that the answer is more nuanced than just yes or no, as is common with technical questions.

The short answer is, yes, Windows does sometimes defragment SSDs, yes, it's important to intelligently and appropriately defrag SSDs, and yes, Windows is smart about how it treats your SSD.

The long answer is this.

Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

Wow, that's awesome and dense. Let's tease it apart a little.

When he says volume snapshots or "volsnap" he means the Volume Shadow Copy system in Windows. This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can roll back to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here - Defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore as one needs the other. You could turn off System Restore if you want, but that turns off a pretty important safety net for Windows.

One developer added this comment, which I think is right on.

I think the major misconception is that most people have a very outdated model of disk\file layout, and how SSDs work.

First, yes, your SSD will get intelligently defragmented once a month. Fragmentation, while less of a performance problem on SSDs than on traditional hard drives, is still a problem. SSDs *do* get fragmented.

It's also worth pointing out that what we (old-timers) think of as "defrag.exe" is really an "optimize your storage" UI now. It was defrag in the past, and now it's a larger automated disk-health system.

Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered primarily a performance issue with traditional hard drives. When a disk gets fragmented, a single file can exist in pieces in different locations on a physical drive. That physical drive then needs to seek around collecting pieces of the file, and that takes extra time.

This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.

SSDs also have the concept of TRIM. While TRIM (retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and its schedule is managed by the same UI from the user's perspective. TRIM is a way for SSDs to mark data blocks as being not in use. Writing to empty blocks on an SSD is faster than writing to blocks in use, as those need to be erased before writing to them again. SSDs internally work very differently from traditional hard drives and don't usually know which sectors are in use and which are free space. Deleting something means marking it as not in use. TRIM lets the operating system notify the SSD that a page is no longer in use, and this hint gives the SSD more information, which results in fewer writes and, theoretically, a longer operating life.
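To make that "queue and retrim" behaviour concrete, here is a deliberately simplified Python sketch of the idea the storage team describes above. It is an illustration of the concept only, not Windows code, and every name in it is made up: deletes queue TRIM hints into a bounded queue that can overflow, and a periodic retrim pass re-sends hints for all free space so that dropped hints are eventually recovered.

    from collections import deque

    QUEUE_LIMIT = 4   # deliberately tiny; the real file-system limit is much larger

    class ToyVolume:
        """Illustrative model of asynchronous TRIM with a bounded request queue."""

        def __init__(self):
            self.free_extents = set()   # extents freed by deletes
            self.trimmed = set()        # extents the "SSD" has been told about
            self.queue = deque()
            self.dropped = 0

        def delete(self, extent):
            """Freeing space queues an asynchronous TRIM; a full queue drops the hint."""
            self.free_extents.add(extent)
            if len(self.queue) < QUEUE_LIMIT:
                self.queue.append(extent)
            else:
                self.dropped += 1       # hint lost, but the space is still free

        def process_queue(self):
            """Normal background processing of queued TRIM requests."""
            while self.queue:
                self.trimmed.add(self.queue.popleft())

        def retrim(self):
            """Scheduled Storage-Optimizer-style pass: re-send TRIM for all free space."""
            self.trimmed.update(self.free_extents)

    vol = ToyVolume()
    for extent in range(10):            # a burst of deletes overflows the tiny queue
        vol.delete(extent)
    vol.process_queue()
    print(f"hints dropped: {vol.dropped}, extents trimmed so far: {len(vol.trimmed)}")
    vol.retrim()                        # the periodic pass catches the stragglers
    print(f"after retrim: {len(vol.trimmed)} of {len(vol.free_extents)} free extents trimmed")

The point is simply that dropping a TRIM hint is harmless as long as something like the scheduled retrim eventually walks the free space again.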

In the old days, you would sometimes be told by power users to run this at the command line to see if TRIM was enabled for your SSD. A zero result indicates it is.

fsutil behavior query DisableDeleteNotify
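On a machine where TRIM is enabled, the output typically looks like this (a value of 1 would mean delete/TRIM notifications are disabled):

DisableDeleteNotify = 0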

However, this stuff is handled by Windows today in 2014, and you can trust that it's "doing the right thing." Windows 7, along with 8 and 8.1, comes with appropriate and intelligent defaults, and you don't need to change them for optimal disk performance. This is also true of Server SKUs like Windows Server 2008 R2 and later.

Conclusion

No, Windows is not foolishly or blindly running a defrag on your SSD every night, and no, Windows defrag isn't shortening the life of your SSD unnecessarily. Modern SSDs don't work the same way that we are used to with traditional hard drives.

Yes, your SSD's file system sometimes needs a kind of defragmentation, and that's handled by Windows, monthly by default, when appropriate. The intent is to maximize performance and extend the drive's life. If you disable defragmentation completely, you are taking a risk that your file system metadata could reach maximum fragmentation, which could potentially get you into trouble.

Source: Hanselman

Backblaze updates hard drive reliability tests

At Backblaze we now have 34,881 drives and store over 100 petabytes of data. We continually track how our disk drives are doing, which ones are reliable, and which ones need to be replaced.

I did a blog post back in January, called “What Hard Drive Should I Buy?” It covered the reliability of each of the drive models that we use. This month I’m updating those numbers and sharing some surprising new findings.

Reliability of Hard Drive Brands

Losing a disk drive at Backblaze is not a big deal. Every file we back up is replicated across multiple drives in the data center. When a drive fails, it is promptly replaced, and its data is restored. Even so, we still try to avoid failing drives, because replacing them costs money.

We carefully track which drives are doing well and which are not, to help us when selecting new drives to buy.

The good news is that the chart today looks a lot like the one from January, and that most of the drives are continuing to perform well. It’s nice when things are stable.

The surprising (and bad) news is that Seagate 3.0TB drives are failing a lot more, with their failure rate jumping from 9% to 15%. The Western Digital 3TB drives have also failed more, with their rate going up from 4% to 7%.

In the chart below, the grey bars are the failure rates up through the end of 2013, and the colored bars are the failure rates including all of the data up through the end of June, 2014.

 

Hard Drive Failure Rates by Model

 

You can see that the HGST (formerly Hitachi) drives, the Seagate 1.5 TB and 4.0 TB drives, and the Western Digital 1.0 TB drives are all continuing to perform as well as they did before. But the failure rates of the Seagate and Western Digital 3.0 TB drives are up quite a bit.

What is the likely cause of this?

It may be that those drives are less well-suited to the data center environment. Or it could be that getting them by drive farming and removing them from external USB enclosures caused problems. We’ll continue to monitor and report on how these drives perform in the future.

Should we switch to enterprise drives?

Assuming we continue to see a failure rate of 15% on these drives, would it make sense to switch to “enterprise” drives instead?

There are two answers to this question:

  1. Today on Amazon, a Seagate 3 TB “enterprise” drive costs $235, versus $102 for a Seagate 3 TB “desktop” drive. Most of the drives we get have a 3-year warranty, making failures a non-issue from a cost perspective for that period. However, even with no warranty, assuming a 15% annual failure rate on the consumer “desktop” drive and a 0% failure rate on the “enterprise” drive, the break-even point would be about 10 years, which is longer than we expect to run the drives for (a rough version of this calculation is sketched after this list).
  2. The assumption that “enterprise” drives would work better than “consumer” drives has not been true in our tests. I analyzed both of these types of drives in our system and found that their failure rates in our environment were very similar — with the “consumer” drives actually being slightly more reliable.
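For those who want to check the arithmetic, here is a minimal Python sketch of that break-even logic, using the prices and failure rates quoted above. The simplifying assumptions (failed consumer drives are simply replaced at list price, enterprise drives never fail, no compounding effects) are ours, for illustration only.

    enterprise_price = 235.0   # Seagate 3 TB "enterprise" drive, quoted above
    consumer_price = 102.0     # Seagate 3 TB "desktop" drive, quoted above
    consumer_afr = 0.15        # assumed annual failure rate for the consumer drive

    premium = enterprise_price - consumer_price            # extra up-front cost per drive slot
    yearly_replacement_cost = consumer_afr * consumer_price

    breakeven_years = premium / yearly_replacement_cost
    print(f"Break-even after roughly {breakeven_years:.1f} years")
    # ~8.7 years under this simplified model -- the same ballpark as the roughly
    # ten-year figure above, and well past the drives' expected service life.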

Detailed Reliability of Hard Drive Models

This table shows the detailed breakdown of how many of which drives we have, how old they are on average, and what the failure rate is. It includes all drive models that we have at least 200 of. A couple of models are new to Backblaze and show a failure rate of “n/a” because there isn’t enough data yet for reliable numbers.

Number of Hard Drives by Model at Backblaze
Model (Model Number)                             Size     Number of Drives   Average Age (years)   Annual Failure Rate
Seagate Desktop HDD.15 (ST4000DM000)             4.0TB          9619                0.6                   3.0%
HGST Deskstar 7K2000 (HGST HDS722020ALA330)      2.0TB          4706                3.4                   1.1%
HGST Deskstar 5K3000 (HGST HDS5C3030ALA630)      3.0TB          4593                2.1                   0.7%
Seagate Barracuda 7200.14 (ST3000DM001)          3.0TB          3846                1.9                  15.7%
HGST Megascale 4000.B (HGST HMS5C4040BLE640)     4.0TB          2884                0.2                   n/a
HGST Deskstar 5K4000 (HGST HDS5C4040ALE630)      4.0TB          2627                1.2                   1.2%
Seagate Barracuda LP (ST31500541AS)              1.5TB          1699                4.3                   9.6%
HGST Megascale 4000 (HGST HMS5C4040ALE640)       4.0TB          1305                0.1                   n/a
HGST Deskstar 7K3000 (HGST HDS723030ALA640)      3.0TB          1022                2.6                   1.4%
Western Digital Red (WDC WD30EFRX)               3.0TB           776                0.5                   8.8%
Western Digital Caviar Green (WDC WD10EADS)      1.0TB           476                4.6                   3.8%
Seagate Barracuda 7200.11 (ST31500341AS)         1.5TB           365                4.3                  24.9%
Seagate Barracuda XT (ST33000651AS)              3.0TB           318                2.2                   6.7%

We use two different models of Seagate 3TB drives. The Barracuda 7200.14 is having problems, but the Barracuda XT is doing well with less than half the failure rate.

There is a similar pattern with the Seagate 1.5TB drives. The Barracuda 7200.11 is having problems, but the Barracuda LP is doing well.

Summary

While the failure rate of Seagate and Western Digital 3 TB hard drives has started to rise, most of the consumer-grade drives in the Backblaze data center are continuing to perform well, and are a cost-effective way to provide unlimited online backup at a good price.

Notes

9-30-2014 – We were nicely asked by the folks at HGST to replace the name Hitachi with the name HGST, given that HGST is no longer a Hitachi company. To that end we have changed Hitachi to HGST in this post and in the graph.

Source: BackBlaze

Seagate debuts 8TB hard drive

When it comes to technology, it is almost impossible to stay on the forefront. You will drive yourself nuts, and empty your wallet, chasing after every new thing. Got the newest and most expensive graphics card? Yesterday's news within months. The newest iPhone? You can make that claim for one year at best.

Hard drives are no different and are probably the longest-running way for manufacturers to take money from nerds. I bought a 4TB drive earlier in the year thinking it would be high-end for some time, but sure enough, it is now yawn-worthy. Why? Today, Seagate begins shipping 8TB hard drives. Yup, twice as big as my 4TB drive. I haven't learned my lesson though as I already want one!

"A cornerstone for growing capacities in multiple applications, the 8TB hard drive delivers bulk data storage solutions for online content storage providing customers with the highest capacity density needed to address an ever increasing amount of unstructured data in an industry-standard 3.5-inch HDD. Providing up to 8TB in a single drive slot, the drive delivers maximum rack density, within an existing footprint, for the most efficient data center floor space usage possible", says Seagate.

The manufacturer further explains, "the 8TB hard disk drive increases system capacity using fewer components for increased system and staffing efficiencies while lowering power costs. With its low operating power consumption, the drive reliably conserves energy thereby reducing overall operating costs. Helping customers economically store data, it boasts the best Watts/GB for enterprise bulk data storage in the industry".

In other words, you can free up SATA connectors and lower energy costs by utilizing one drive instead of multiple. Think of it this way: I already own a 4TB drive. If I add a second 4TB drive instead of replacing the first with an 8TB variant, I will be wasting a SATA port and using more electricity. For a home user, this isn't a huge deal, but in a server environment, it can really add up. Over time, the savings could justify the cost.

Cost is the big mystery though, as Seagate has not announced an MSRP. However, it is shipping a limited supply of the drives to select retailers and will open it up to more later in the year. Expect them to be expensive, at least for the time being. Hopefully they will work in existing USB enclosures, so laptop and Surface users can enjoy the fun too.

Do you need 8TB of storage on your home computer? What are you storing? Tell me in the comments.

New Middleware Technology Quadruples SSD Speed

A Japanese research team has developed a technology that drastically improves the write speed, power efficiency and write/erase endurance (product life) of storage devices based on NAND flash memory (SSDs).

The team is led by Ken Takeuchi, professor at the Department of Electrical, Electronic and Communication Engineering, Faculty of Science and Engineering of Chuo University. The development was announced at the 2014 IEEE International Memory Workshop (IMW), an international academic conference on semiconductor memory technologies, which took place from May 18 to 21, 2014, in Taipei. The title of the paper is "NAND Flash Aware Data Management System for High-Speed SSDs by Garbage Collection Overhead Suppression."

With NAND flash memory, it is not possible to overwrite data in place, so new data must be written to a different area and the old area invalidated. As a result, data becomes fragmented, increasing the amount of invalid area and decreasing usable capacity. Therefore, NAND flash memories carry out "garbage collection," which rearranges fragmented data contiguously and erases blocks of invalid area. This process takes 100ms or longer, drastically decreasing the write speed of the SSD.

In September 2013, to address this issue, the research team developed a method to prevent data fragmentation by improving the middleware that controls storage for database applications. It makes two layers work in conjunction: (1) the "SE (storage engine)" middleware, which assigns logical addresses when application software accesses a storage device, and (2) the FTL (flash translation layer) middleware, which converts logical addresses into physical addresses on the SSD controller side. This time, the team has developed a more versatile method that can be used with a wider variety of applications.

The new method forms a middleware layer called "LBA (logical block address) scrambler" between the file system (OS) and FTL. The LBA scrambler works in conjunction with the FTL and converts the logical addresses of data being written to reduce the effect of fragmentation.

Specifically, instead of writing data on a new blank page, data is written on a fragmented page located in the block to be erased next. As a result, the ratio of invalid pages in the block to be erased increases, reducing the number of valid pages that need to be copied to another area at the time of garbage collection.
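To see why this helps, here is a toy Python sketch of our own (not the team's code, and all the numbers are arbitrary). It only models the general principle: it compares how many still-valid pages must be copied out of the next block to be erased when page invalidations land at random, versus when they are steered into that block, which is the effect the LBA scrambler is aiming for.

    import random

    PAGES_PER_BLOCK = 128
    NUM_BLOCKS = 32
    UPDATES = 512       # page overwrites arriving before the next block erase

    def pages_copied_at_erase(steered):
        """Valid pages that must be relocated from the block chosen for the next erase."""
        random.seed(1)                   # fixed seed so both runs are comparable
        invalid = [0] * NUM_BLOCKS       # invalidated-page count per block
        for _ in range(UPDATES):
            if steered and invalid[0] < PAGES_PER_BLOCK:
                blk = 0                  # steer invalidations into the next victim (block 0)
            else:
                blk = random.randrange(NUM_BLOCKS)   # baseline: invalidations land anywhere
            invalid[blk] = min(invalid[blk] + 1, PAGES_PER_BLOCK)
        # erase the block holding the most invalid (fewest valid) pages
        victim = max(range(NUM_BLOCKS), key=lambda b: invalid[b])
        return PAGES_PER_BLOCK - invalid[victim]

    print("baseline:", pages_copied_at_erase(steered=False), "valid pages copied per erase")
    print("steered :", pages_copied_at_erase(steered=True), "valid pages copied per erase")

Under the steered workload the victim block ends up almost entirely invalid, so the garbage collector has little or nothing to copy before erasing it, which is where the claimed speed and endurance gains come from.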

In a simulation, the research team confirmed that the new technology improves the write speed of an SSD by up to 300%, reduces power consumption by up to 60%, and cuts the number of write/erase cycles by up to 55%, extending product life. Because the new method requires no changes to the NAND flash memory itself and is implemented entirely in middleware, it can be applied to existing SSDs as-is.

Source: Nikkei Technology

Hard drive reliability study names names

In November, online backup provider Backblaze published some interesting statistics on hard drive mortality based on over 25,000 units in active service. It found that failure rates were higher in the first 18 months and after three years. Those conclusions matched the findings of other studies on the subject, but frustratingly, they didn't include information on specific makes and models.

Today, Backblaze is naming names.

The firm has posted details on failure rates for 15 different consumer-grade hard drives, and the numbers don't look good for Seagate. See for yourself:

And that doesn't even tell the whole story. In Backblaze's storage pods, Seagate's Barracuda 1.5TB has an annual failure rate of over 25%. The 5,400-RPM version of that drive fares better—its failure rate is only 10%—but that's still pretty high compared to the competition. The failure rate of similar Hitachi drives in the same environment is less than 2%.

Only 10% of the hard drives in Backblaze's storage pods come from WD, and they're strictly low-power Green and Red models. The annual failure rates are pretty low, though: only 3-4%. Backblaze's purchasing decisions are largely driven by price, which is probably why fewer WD drives are in the mix. They tend to be a little pricier.

Interestingly, two drives proved to be so unreliable in Backblaze's storage pods that they were left out of the totals completely. Seagate's Barracuda LP 2TB and WD's Green 3TB "start accumulating errors as soon as they are put into production," the company says. It thinks vibration might be part of the problem. Other Barracuda LP and Green models seem unfazed, though.

Here's a look at survival rates over time:

After three years, only about three quarters of the Seagate drives remain. A surprising number of those failures come between 18 and 24 months, which contradicts the overall trend noted in Backblaze's initial study. Infant mortality seems to be a bigger problem for the WD drives, while the Hitachis fail at a steady but slow rate.

Backblaze says the Seagate drives are also more prone to dropping out of RAID arrays prematurely. The company uses consumer-grade drives that aren't designed explicitly for RAID environments, of course, but that doesn't seem to bother the Hitachis. They spend just 0.01% of their time in so-called "trouble" states, compared to 0.17% for the WD drives and 0.28% for the Seagates.

Overall, Backblaze's data suggests that Seagate drives are less reliable than their peers. That matches my own experiences with a much smaller sample size, and it may influence our future recommendations in the System Guide. Hmm. In the meantime, kudos to Backblaze for not only collecting this data, but also publishing a detailed breakdown.

Source: Tech Report

Alienware Steam Machine not an easy upgrade

PC manufacturer Alienware has clarified an earlier report that its upcoming Steam Machine cannot be upgraded by users.

Owners will not be locked out of modding, the company has now explained, but making alterations to the hardware will not be "easy".

"Enabling customers the opportunity to upgrade components has been a core tenet for Alienware since the company was founded, and that remains true today," Alienware boss Frank Azor explained to Eurogamer in a statement today.

"The Alienware Steam Machine, announced at CES, is designed to deliver a great gaming experience in the living room and we will enable customers to upgrade components. Considering we've purposefully designed the Alienware Steam Machine to be smaller than the latest generation consoles, upgrading the internal components will not be as easy as compared to other platforms, such as the Alienware X51, but we will not prevent a customer from upgrading."

Azor's comments take a different tack than his position earlier this week when he claimed that, for Alienware's Steam Machine, "there will be no customisation options, you can't really update it."

Nevertheless, Azor reiterated his earlier comment that users who are interested in heavily modding their hardware would be better off buying one of the company's more conventional models.

"If a customer is interested in modding and upgrading their rig on a regular basis, then we recommend the Alienware X51," he said. "Enabling easy upgradeability was a critical design requirement for the X51. It includes features such as single screw access to all internal components, and easy-to-remove ODD, HDD, graphics, etc.

"We feel we have multiple options for customers based on their individual needs. If a gamer wants more freedom to upgrade, we have the X51. If they would prefer a smaller, more console-like system, we will offer the Alienware Steam Machine."

A Steam Machine is expected to cost around the same price as a new generation console. The range will be updated with fresh hardware every year.

Source: Eurogamer

DDR4 memory not far off

It seems that DDR4 isn’t as far away as we thought. According to Crucial Memory’s promo page, it’s going to come out in late 2013. There is just one month left until the year’s end. So, that being said, we will hopefully have DDR4 in our PCs by next month.

Of course, DDR4 has a different architecture, meaning we are going to need a different motherboard; we can’t just put the new modules in our old DDR3 systems. But is it worth upgrading to DDR4? Crucial Memory also provided a comparison chart with some specifications of what DDR4 will offer, just to show you the difference.

DDR4 will only draw 1.2 volts, as stated by Crucial, while offering twice the speed of DDR3 memory. DDR4 will run at a base memory speed of 2133MHz and have a minimum density of 4GB. According to the chart, DDR4 is 100% faster than DDR3, requires 20% less voltage, and has 300% more density than DDR3.
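Those percentages are simple arithmetic. Here is a quick Python check, assuming the chart's DDR3 baselines are 1066 MT/s, 1.5 V and 1GB minimum density; those DDR3 baseline figures are our assumption, while the DDR4 numbers are the ones quoted above.

    ddr3 = {"speed_mts": 1066, "voltage": 1.5, "min_density_gb": 1}   # assumed DDR3 baselines
    ddr4 = {"speed_mts": 2133, "voltage": 1.2, "min_density_gb": 4}   # figures quoted above

    faster = (ddr4["speed_mts"] / ddr3["speed_mts"] - 1) * 100                    # ~100% faster
    less_voltage = (1 - ddr4["voltage"] / ddr3["voltage"]) * 100                  # 20% less voltage
    more_density = (ddr4["min_density_gb"] / ddr3["min_density_gb"] - 1) * 100    # 300% more density

    print(f"{faster:.0f}% faster, {less_voltage:.0f}% less voltage, {more_density:.0f}% more density")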

Steam Controller

A new way to play your entire Steam library from the sofa. Join the Steam hardware beta and help us shape a new generation of gaming.

A different kind of gamepad

We set out with a singular goal: bring the Steam experience, in its entirety, into the living room. We knew how to build the user interface, we knew how to build a machine, and even an operating system. But that still left input — our biggest missing link. We realized early on that our goals required a new kind of input technology — one that could bridge the gap from the desk to the living room without compromises. So we spent a year experimenting with new approaches to input and we now believe we’ve arrived at something worth sharing and testing with you.

Complete catalog

The Steam Controller is designed to work with all the games on Steam: past, present, and future. Even the older titles in the catalog and the ones which were not built with controller support. (We’ve fooled those older games into thinking they’re being played with a keyboard and mouse, but we’ve designed a gamepad that’s nothing like either one of those devices.) We think you’ll agree that we’re onto something with the Steam Controller, and now we want your help with the design process.

Superior performance

Traditional gamepads force us to accept compromises. We’ve made it a goal to improve upon the resolution and fidelity of input that’s possible with those devices. The Steam Controller offers a new and, we believe, vastly superior control scheme, all while enabling you to play from the comfort of your sofa. Built with high-precision input technologies and focused on low-latency performance, the Steam Controller is just what the living room ordered.

Dual trackpads

The most prominent elements of the Steam controller are its two circular trackpads. Driven by the player’s thumbs, each one has a high-resolution trackpad as its base. It is also clickable, allowing the entire surface to act as a button. The trackpads allow far higher fidelity input than has previously been possible with traditional handheld controllers. Steam gamers, who are used to the input associated with PCs, will appreciate that the Steam Controller’s resolution approaches that of a desktop mouse.

Whole genres of games that were previously only playable with a keyboard and mouse are now accessible from the sofa. RTS games. Casual, cursor-driven games. Strategy games. 4x space exploration games. A huge variety of indie games. Simulation titles. And of course, Euro Truck Simulator 2.

In addition, games like first-person shooters that are designed around precise aiming within a large visual field now benefit from the trackpads’ high resolution and absolute position control.

Haptics

Trackpads, by their nature, are less physical than thumbsticks. By themselves, they are “light touch” devices and don’t offer the kind of visceral feedback that players get from pushing joysticks around. As we investigated trackpad-based input devices, it became clear through testing that we had to find ways to add more physicality to the experience. It also became clear that “rumble”, as it has been traditionally implemented (a lopsided weight spun around a single axis), was not going to be enough. Not even close.

The Steam Controller is built around a new generation of super-precise haptic feedback, employing dual linear resonant actuators. These small, strong, weighted electro-magnets are attached to each of the dual trackpads. They are capable of delivering a wide range of force and vibration, allowing precise control over frequency, amplitude, and direction of movement.

This haptic capability provides a vital channel of information to the player - delivering in-game information about speed, boundaries, thresholds, textures, action confirmations, or any other events about which game designers want players to be aware. It is a higher-bandwidth haptic information channel than exists in any other consumer product that we know of. As a parlour trick they can even play audio waveforms and function as speakers.

Touch Screen

In the center of the controller is another touch-enabled surface, this one backed by a high-resolution screen. This surface, too, is critical to achieving the controller’s primary goal - supporting all games in the Steam catalog. The screen allows an infinite number of discrete actions to be made available to the player, without requiring an infinite number of physical buttons.

The whole screen itself is also clickable, like a large single button. So actions are not invoked by a simple touch, they instead require a click. This allows a player to touch the screen, browse available actions, and only then commit to the one they want. Players can swipe through pages of actions in games where that’s appropriate. When programmed by game developers using our API, the touch screen can work as a scrolling menu, a radial dial, provide secondary info like a map or use other custom input modes we haven’t thought of yet.

In order to avoid forcing players to divide their attention between screens, a critical feature of the Steam Controller comes from its deep integration with Steam. When a player touches the controller screen, its display is overlayed on top of the game they’re playing, allowing the player to leave their attention squarely on the action, where it belongs.

Buttons

Every button and input zone has been placed based on frequency of use, precision required and ergonomic comfort. There are a total of sixteen buttons on the Steam Controller. Half of them are accessible to the player without requiring thumbs to be lifted from the trackpads, including two on the back. All controls and buttons have been placed symmetrically, making left or right handedness switchable via a software config checkbox.

Shared configurations

In order to support the full catalog of existing Steam games (none of which were built with the Steam Controller in mind), we have built in a legacy mode that allows the controller to present itself as a keyboard and mouse. The Steam Community can use the configuration tool to create and share bindings for their favorite games. Players can choose from a list of the most popular configurations.

Openness

The Steam Controller was designed from the ground up to be hackable. Just as the Steam Community and Workshop contributors currently deliver tremendous value via additions to software products on Steam, we believe that they will meaningfully contribute to the design of the Steam Controller. We plan to make tools available that will enable users to participate in all aspects of the experience, from industrial design to electrical engineering. We can’t wait to see what you come up with.

Questions!

  • Can I use a controller if I don’t have a Steam machine?
    Yes. It’ll work very well with any version of Steam.
  • I’m a developer - how can I include support for the Steam Controller in my game?
    On the same day that our prototype controllers ship to customers later this year, the first version of our API will also be made available to game developers.
  • How will the beta controller differ from the one that’s for sale next year?
    There are a couple important differences: the first 300 or so beta units won’t include a touch screen, and they won’t be wireless. Instead, they’ll have four buttons in place of the touch screen, and they’ll require a USB cable.
  • What’s next?
    We’re done with our announcements, and we promise to switch gears now and talk specifics over here in our Steam Universe community group. Also we’ll talk soon about the design process and how we’ve arrived at our current prototype. (We’ll post detailed specs next week for our living room SteamOS prototype, too.)

We look forward to working together with you to design the future of Steam in the living room.
