Hardware

Seagate today announced a new line of hard drives with up to 10TB of capacity for desktop computers, network-attached storage (NAS) and surveillance systems.

The high-capacity drives, dubbed the Guardian Series, represent a 2TB increase over the capacity of previous Seagate hard drives in the consumer and small business category.

The Guardian Series consists of the BarraCuda Pro desktop drive, the Seagate IronWolf for NAS applications and the Seagate SkyHawk for video surveillance systems.

Seagate also said it has resurrected the Barracuda brand, now restyled as "BarraCuda," for its line of consumer desktop and laptop hard drives; the company retired the name in favor of the "Desktop Hard Drive" label a few years ago.

The standard BarraCuda line now includes hard disk drives with spindle speeds ranging from 5,900rpm to 7,200rpm and capacities ranging from 500GB to 10TB. The drives also come with 16MB to 64MB of DRAM cache, depending on the overall capacity, and are offered in 2.5-in. laptop and 3.5-in. desktop form factors. The thinnest 2.5-in. BarraCuda drive is 7mm thick, slim enough for ultrathin notebooks, and offers up to 2TB of capacity.

The updated BarraCuda drive line will offer sustained data transfer rates of up to 210MB/s. The 2TB models will retail for $81 and the 3TB models will sell for $100.

Seagate also announced a new drive for PC "enthusiasts," the BarraCuda Pro, which comes in capacities of up to 10TB. The drive has a 7,200rpm spindle speed and a data transfer rate of up to 220MB/s, and comes with a five-year limited warranty. That's more than twice the typical two-year BarraCuda HDD warranty.

"BarraCuda Pro offers the highest PC Compute spin speed at 7200 RPM for 3.5-in. HDD drives on the market," said Chris Deardorff, a Seagate senior marketing strategist.

The drive also comes with Seagate's Self-Encrypting Drive (SED) technology, which password-protects data on the drive and also lets users crypto-erase it by changing the encryption key, ensuring no one can access the old data.
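Conceptually, crypto-erase works because data only ever lands on the media in encrypted form; replace the key and the ciphertext becomes unrecoverable. Below is a minimal sketch of that idea, assuming the third-party Python "cryptography" package purely for illustration; it stands in no way for Seagate's actual SED firmware.

# Conceptual sketch of crypto-erase, not Seagate's SED implementation.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # the drive's internal key
cipher = Fernet(key)
stored = cipher.encrypt(b"user data written to the media")

print(cipher.decrypt(stored))                # normal read with the current key

# Crypto-erase: replace the key. The ciphertext is untouched but unreadable.
new_cipher = Fernet(Fernet.generate_key())
try:
    new_cipher.decrypt(stored)
except Exception:
    print("data is effectively erased: no valid key remains")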

The BarraCuda Pro can sustain up to 55TB of data writes per year, according to Deardorff. The 10TB BarraCuda Pro will retail for $535.

Another hard drive announced today in the BarraCuda lineup is the FireCuda, which is aimed at gamers and comes in both 2.5-in. and 3.5-in. form factors, in either 1TB or 2TB capacities.

The FireCuda is a solid-state hybrid drive (SSHD), meaning it uses a small amount (8GB) of NAND flash as a caching layer to increase performance by up to five times over standard BarraCuda drives. Data is written to the NAND flash before being committed to the spinning media, which keeps performance high even though the spindle speed is just 5,900rpm. The drive has a maximum sustained read rate of 210MB/s.
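For illustration only, here is a minimal sketch of the write-back caching idea behind an SSHD; the class and method names are hypothetical, and the real logic lives inside the drive's firmware.

# Toy model of SSHD caching: absorb writes in a small fast buffer (the NAND),
# then destage them to the slower spinning media. Purely illustrative.
class HybridDrive:
    def __init__(self, cache_limit=8):
        self.cache = {}          # stands in for the 8GB NAND buffer
        self.disk = {}           # stands in for the spinning platters
        self.cache_limit = cache_limit

    def write(self, lba, data):
        self.cache[lba] = data   # fast path: acknowledge immediately
        if len(self.cache) > self.cache_limit:
            self.flush()

    def flush(self):
        self.disk.update(self.cache)   # destage to the platters
        self.cache.clear()

    def read(self, lba):
        return self.cache.get(lba, self.disk.get(lba))

drive = HybridDrive()
drive.write(0, b"frequently used game data")
print(drive.read(0))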

Seagate has been selling SSHDs since 2011, so the FireCuda is not new technology. The FireCuda will retail for $85 for a 1TB drive, $110 for the 2TB model.

For small businesses, Seagate has refreshed its NAS drive lineup under the IronWolf brand. The IronWolf is aimed at NAS devices with one to 16 drive bays and comes with up to 10TB of capacity and Seagate's AgileArray (formerly NASWorks) software. AgileArray technology supports error recovery controls, power management and vibration tolerance for reliability when used in multi-bay NAS devices.

The IronWolf, which is rated for up to 180TB of writes per year, is more resilient than other Seagate drive models, with a one-million-hour mean time between failures (MTBF) rating, according to Jennifer Bradfield, a Seagate senior director of product marketing.

The drive can also power down into a sleep mode when not in use, sipping only 0.8 watts compared with the 6.8 watts it draws while active.

The IronWolf HDDs offer a Rescue Data Recovery Service plan that protects against data loss from viruses, software issues, or mechanical and electrical breakdowns in a NAS or RAID environment. A failed drive can be sent back to Seagate where its in-house "Rescue Service" will attempt to retrieve data. The drive also comes with a three-year limited warranty. The IronWolf 10TB HDD will retail for $470.

Seagate's new SkyHawk HDD lineup is a rebrand of the previous SV35 series of video surveillance hard drives. The new 7,200rpm drive comes with up to 10TB of capacity for storing up to 10,000 hours of HD video. It also comes with ImagePerfect firmware from MTC Technology.

The firmware, which allows the drive to be used by motion-sensing cameras, powers down the drive when it's not in use to reduce power consumption and heat generation. It then powers up quickly to provide uninterrupted recording.

Like the IronWolf, the SkyHawk drives use rotational vibration sensors to help minimize read/write errors, and they can support up to 64 HD cameras -- more than any other drive on the market, according to Aubrey Muhlach, Seagate's Worldwide Surveillance Segment marketing manager.

Designed for modern, high-resolution systems running around the clock, SkyHawk drives also come with a data recovery services option.

The SkyHawk HDD supports up to 180TB worth of data writes per year, has a one-million-hour MTBF and a three-year limited warranty.

The 10TB SkyHawk HDD will retail for $460.

Source: ComputerWorld

Yes, we know. Our smartphone batteries are bad because they barely last a day.

But it’s partially our fault because we’ve been charging them wrong this whole time. 

Many of us have an ingrained notion that charging our smartphones in small bursts will cause long-term damage to their batteries, and that it’s better to charge them when they’re close to dead.

But we couldn’t be more wrong.

In fact, a site from battery company Cadex called Battery University details how the lithium-ion batteries in our smartphones are sensitive to their own versions of “stress.” And, as with humans, extended stress can damage your smartphone battery’s long-term lifespan.

If you want to keep your smartphone battery in top condition and go about your day without worrying about battery life, you need to change a few things.

Don’t keep it plugged in when it’s fully charged

According to Battery University, leaving your phone plugged in when it’s fully charged, like you might overnight, is bad for the battery in the long run.

Once your smartphone has reached 100% charge, it gets “trickle charges” to keep it at 100% while plugged in. These trickle charges keep the battery in a high-stress, high-tension state, which wears down the chemistry inside.

Battery University goes into a bunch of scientific detail explaining why, but it also sums it nicely: “When fully charged, remove the battery” from its charging device. “This is like relaxing the muscles after strenuous exercise.” You too would be pretty miserable if you worked out nonstop for hours and hours.

[Image: The batteries in these phones get stressed out, too. Skye Gould/Tech Insider]

In fact, try not to charge it to 100%

At least when you don’t have to.

According to Battery University, “Li-ion does not need to be fully charged, nor is it desirable to do so. In fact, it is better not to fully charge, because a high voltage stresses the battery” and wears it away in the long run. 

That might seem counterintuitive if you’re trying to keep your smartphone charged all day, but just plug it in whenever you can during the day, and you’ll be fine. 

[Image: Don’t shoot your smartphone batteries, either. 24M]

Plug in your phone whenever you can

It turns out that the batteries in our smartphones are much happier if you charge them occasionally throughout the day instead of plugging them in for a big charging session when they’re empty.

Charging your phone when it loses 10% of its charge would be the best-case scenario, according to Battery University. Obviously, that’s not practical for most people, so just plug in your smartphone whenever you can. It’s fine to plug and unplug it multiple times a day.

Not only does this keep your smartphone’s battery performing optimally for longer, but it also keeps it topped up throughout the day. 

Plus, periodic top-ups also let you use features you might not normally use because they hog your battery life, like location-based features that use your smartphone’s GPS antenna.

 

Source: C2H5OH/YouTube

Keep it cool

Smartphone batteries are so sensitive to heat that Apple itself suggests removing certain cases that trap heat when you charge your iPhone: “If you notice that your device gets hot when you charge it, take it out of its case first.” If you’re out in the hot sun, keep your phone covered. It’ll protect your battery’s health.

Source: Business Insider

It's not hard to get a capacious solid-state drive if you're running a server farm, but everyday users still have to be picky more often than not: either you get a roomy-but-slow spinning hard drive or give up that capacity in the name of a speedy SSD. Samsung may have finally delivered a no-compromise option, however. It's introducing a 4TB version of the 850 Evo that, in many cases, could easily replace a reasonably large hard drive. While it's not the absolute fastest option (the SATA drive is capped at 540MB/s sequential reads and 520MB/s writes), it beats having to resort to a secondary hard drive just to make space for your Steam game library.

Of course, there's a catch: the price. The 4TB 850 Evo will set you back a whopping $1,500 in the US, so it's largely reserved for pros and well-heeled enthusiasts who refuse to settle for rotating storage. Suddenly, the $700 2TB model seems like a bargain. Even if the 4TB version is priced into the stratosphere, though, it's a good sign that SSDs are turning a corner in terms of viability. It might not be long before high-capacity SSDs are inexpensive enough that you won't have to make any major sacrifices to put one in your PC.

Source: Engadget

Samsung Electronics, the world leader in advanced memory technology, announced today that it has begun mass producing the industry’s first NVMe* PCIe solid state drive (SSD) in a single ball grid array (BGA) package, for use in next-generation PCs and ultra-slim notebook PCs. The new BGA NVMe SSD, named PM971-NVMe, features an extremely compact package that contains all essential SSD components including NAND flash memory, DRAM and controller while delivering outstanding performance.

“Samsung’s new BGA NVMe SSD triples the performance of a typical SATA SSD, in the smallest form factor available, with storage capacity reaching up to 512GB,” said Jung-bae Lee, senior vice president, Memory Product Planning & Application Engineering Team, Samsung Electronics. “The introduction of this small-scale SSD will help global PC companies to make timely launches of slimmer, more stylish computing devices, while offering consumers a more satisfactory computing environment.”

Configuring the PM971-NVMe SSD in a single BGA package was enabled by combining 16 of Samsung’s 48-layer 256-gigabit (Gb) V-NAND flash chips, one 20-nanometer 4Gb LPDDR4 mobile DRAM chip and a high-performance Samsung controller. The new SSD is 20mm x 16mm x 1.5mm and weighs only about one gram (an American dime by comparison weighs 2.3 grams). The single-package SSD’s volume is approximately a hundredth of a 2.5” SSD or HDD, and its surface area is about a fifth of an M.2 SSD, allowing much more design flexibility for computing device manufacturers.
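A quick back-of-envelope check of those size claims, assuming nominal dimensions for a 7mm 2.5-in. drive and an M.2 2280 card (both assumptions for illustration, not figures from Samsung):

# Rough footprint check. 2.5-in. and M.2 2280 dimensions are assumed nominals.
bga_volume = 20 * 16 * 1.5               # PM971-NVMe package, mm^3
drive_2_5in = 100 * 69.85 * 7.0          # typical 7mm 2.5-in. SSD, mm^3
m2_2280_area = 80 * 22                   # M.2 2280 card, mm^2

print(drive_2_5in / bga_volume)          # ~102x larger by volume
print(m2_2280_area / (20 * 16))          # ~5.5x larger by surface area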

In addition, the PM971-NVMe SSD delivers a level of performance that easily surpasses the speed limit of a SATA 6Gb/s interface. It enables sequential read and write speeds of up to 1,500MB/s (megabytes per second) and 900MB/s respectively, when TurboWrite** technology is used. The performance figures can be directly compared to transferring a 5GB-equivalent, Full-HD movie in about 3 seconds or downloading it in about 6 seconds. It also boasts random read and write IOPS (input output operations per second) of up to 190K and 150K respectively, to easily handle high-speed operations. A hard drive, by contrast, will only process up to 120 IOPS in random reads, making the new Samsung SSD more than 1500 times faster than an HDD in this regard.
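The arithmetic behind those comparisons works out roughly as stated:

# Back-of-envelope math for the press-release comparisons.
movie_gb = 5
read_mb_s, write_mb_s = 1500, 900
print(movie_gb * 1000 / read_mb_s)       # ~3.3 s to read a 5GB movie
print(movie_gb * 1000 / write_mb_s)      # ~5.6 s to write it

ssd_iops, hdd_iops = 190_000, 120        # random read IOPS
print(ssd_iops / hdd_iops)               # ~1,583x, i.e. "more than 1500 times"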

The PM971-NVMe SSD line-up will be available in 512GB, 256GB and 128GB storage options. Samsung will start providing the new SSDs to its customers this month worldwide.

As a leading SSD provider, Samsung has a history of introducing advanced SSDs ahead of the industry. In June 2013, Samsung introduced XP941 SSD in M.2 (mini PCI-Express 2.0) form factor (80mm x 22mm), which was also the industry’s first PCIe SSD for PCs. Now, Samsung plans to rapidly expand its market base in the next-generation premium notebook PC sector with the new high-performance, BGA package, NVMe SSD. Later this year, Samsung plans to introduce more high-capacity and ultra-fast NVMe SSDs to meet increasing customer needs for improved performance and greater density.

* Often shortened as NVMe, NVM Express (Non-Volatile Memory Express) is an optimized, high performance, scalable host controller interface with a streamlined register interface and command set designed for enterprise, datacenter and client systems that use non-volatile memory storage. For more information, please visit www.nvmexpress.org

** TurboWrite is a Samsung proprietary technology that temporarily uses certain portions of an SSD as a write buffer. TurboWrite delivers better PC experiences as users can enjoy much faster sequential write speeds.


Source: Samsung

Flash storage is too slow for your device's main memory, but RAM is expensive and volatile. Thanks to a breakthrough from IBM, phase-change memory (PCM) might one day replace them both. The crystal-based storage has been used in optical disks and other tech for at least 15 years, but the technology has been limited by cost and storage density -- cells are either "on" or "off." However, IBM researchers have figured out how to save 3 bits of data per cell, dramatically increasing the capacity of the original tech.

To store PCM data on a Blu-ray disc, you apply a high current to amorphous (non-crystalline) glass materials, transforming them into a more conductive crystalline form. To read it back, you apply a lower voltage to measure conductivity -- when it's high, the state is "1," and when it's low, it's "0." By heating the materials to varying degrees, more states can be stored, but the problem is that the crystals can "drift" depending on the ambient temperature. IBM's team figured out how to track and encode those variations, allowing them to reliably read 3 bits of data per cell long after the data was written.
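As a toy illustration of what 3-bit-per-cell readout means, the sketch below quantizes a measured conductance into one of eight levels and maps each level to three bits. The thresholds are made up; IBM's actual contribution is the drift tracking that keeps those levels distinguishable over time and temperature.

# Toy 3-bit-per-cell readout: 8 conductance levels map to 3 bits each.
# Thresholds are hypothetical; real cells need drift compensation.
def read_cell(conductance, levels=8, max_conductance=1.0):
    step = max_conductance / levels
    level = min(int(conductance / step), levels - 1)
    return format(level, "03b")

for g in (0.05, 0.40, 0.78, 0.99):
    print(g, "->", read_cell(g))         # 000, 011, 110, 111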

That suddenly makes PCM a lot more interesting -- its speed is currently much better than flash, but the costs are as high as RAM thanks to the low density. "Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash," says Dr. Haris Pozidis from IBM Research. With the discovery, phase-change memory could become feasible for more than just optical disks.

For instance, PCM memory could be used in conjunction with flash to create an extremely fast cache for a cell phone. "A mobile phone's operating system could be stored in PCM, enabling the phone to launch in a few seconds," according to the researchers. It could also replace regular SSDs for time-critical applications, because PCM memory can read data in less than 1 microsecond, compared to 70 microseconds for flash. RAM is still much faster, of course, but in certain applications, PCM could work as "universal" storage and replace both RAM and flash memory.

The research still needs to be developed commercially, and has to compete with other promising tech like memristors and resistive RAM. IBM thinks that it's more than feasible, however, and sees it as the perfect storage medium for Watson-like artificial intelligence apps.

Source: Engadget

Samsung has just announced a 256GB microSD card, raising the bar for storage on the format. SanDisk's 200GB microSD card, released back in March, was until now the highest-capacity microSD card, but it will have to settle for second place.

The EVO Plus 256GB microSD card has read and write speeds of 95MB/s and 90MB/s respectively, and can store up to 55,200 photos, 12 hours of 4K video, 33 hours of full HD video, or 23,500 songs. Samsung says the card will come with a 10-year limited warranty and will be available in over 50 countries beginning in June for $249.99.
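For a rough sense of the per-item sizes those claims imply (illustrative arithmetic only, not Samsung's own assumptions):

# Implied per-item sizes behind the capacity claims.
capacity_gb = 256
print(capacity_gb * 1000 / 55_200)       # ~4.6 MB per photo
print(capacity_gb * 1000 / 23_500)       # ~10.9 MB per song
print(capacity_gb * 8 / (12 * 3600))     # ~0.047 Gbit/s (~47 Mbps) for 4K video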

Source: The Verge

Through years of dev kits, prototypes, and trade show demos of the Oculus Rift, we've been stuck guessing at just how much hardware power the eventual consumer version of the device would require. Now, with that consumer launch officially slated for early 2016, Oculus has announced what PC hardware it recommends for a quality VR experience.

According to Oculus, those recommended hardware specs are:

  • NVIDIA GTX 970 / AMD 290 equivalent or greater
  • Intel i5-4590 equivalent or greater
  • 8GB+ RAM
  • Compatible HDMI 1.3 video output
  • 2x USB 3.0 ports
  • Windows 7 SP1 or newer

That's a relatively beefy system, all things considered. A quick price check on Newegg suggests that the listed CPU, RAM, and video card would add up to just over $600. Add in a barebones tower, motherboard, and 250GB solid-state drive, and you're looking at a nearly $900 system to run the Rift, all told. That's before you account for the (still unannounced) price of the headset itself. Upgrading from an existing gaming rig will obviously be cheaper, and component costs will come down by the Rift's early 2016 launch, but a lot of potential VR users are still going to be staring down some significant upgrade costs.

The Windows 7 requirement is also surprising, given that current Rift development kits run on Mac OS X and Linux as well. In a detailed explanation of the recommended specs, Oculus Chief Architect Atman Binstock says that "our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux, but we don’t have a timeline."

Elsewhere in the blog post, Binstock reveals that the consumer Rift itself "runs at 2160×1200 at 90Hz split over dual displays." That translates to 233 million raw pixels per second, but the "eye target scale" for Rift scenes is an even higher 400 million shaded pixels per second, Binstock says. That's "approximately 3x the GPU power of 1080p rendering" at 60 frames per second on a normal monitor, according to Binstock, a requirement that's more serious when you take into account VR's need for a rock-solid refresh rate with no dropped frames.
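The pixel-rate arithmetic behind Binstock's figures is straightforward:

# Pixel-rate math from the quoted figures.
rift_raw = 2160 * 1200 * 90              # 233,280,000 raw pixels per second
monitor_1080p = 1920 * 1080 * 60         # ~124 million pixels per second
rift_shaded = 400_000_000                # Oculus' "eye target scale"

print(rift_raw)
print(rift_shaded / monitor_1080p)       # ~3.2x, "approximately 3x"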

Oculus' recommended hardware configuration will be able to provide a solid sense of virtual reality presence "over the lifetime of the Rift," Binstock says, and allows game developers to "optimize for a known hardware configuration, which ensures a better player experience of comfortable sustained presence." Binstock also points out that equivalent-performance hardware will come down in price over the life of the Rift, expanding the universe of PCs that can handle its virtual reality. And while "almost no current laptops have the GPU performance for the recommended spec... upcoming mobile GPUs may be able to support this level of performance."

Going forward, Binstock suggests that the launch of consumer VR "will likely drive changes in GPUs, OSs, drivers, 3D engines, and apps, ultimately enabling much more efficient low-latency VR performance." Translation: Maybe we can expect even cheaper, VR-focused PC hardware once the manufacturers catch up to our needs.

Source: ArsTechnica

There has been a LOT of confusion around Windows, SSDs (solid-state drives), and whether or not they are being automatically defragmented by Windows' maintenance tasks.

There's a general rule of thumb or statement that "defragging an SSD is always a bad idea." I think we can agree we've all heard this before. We've all been told that SSDs don't last forever and when they die, they just poof and die. SSDs can only handle a finite number of writes before things start going bad. This is of course true of regular spinning rust hard drives, but the conventional wisdom around SSDs is to avoid writes that are perceived as unnecessary.

Does Windows really defrag your SSD?

I've seen statements around the web like this:

I just noticed that the defragsvc is hammering the internal disk on my machine.  To my understanding defrag provides no value add on an SSD and so is disabled by default when the installer determines the disk is SSD.  I was thinking it could be TRIM working, but I thought that was internal to the SSD and so the OS wouldn’t even see the IO.

One of the most popular blog posts on the topic of defrag and SSDs under Windows is by Vadim Sterkin. Vadim's analysis has a lot going on. He can see that defrag is doing something, but it's not clear why, how, or for how long. What's the real story? Something is clearly running, but what is it doing and why?

I made some inquiries internally, got what I thought was a definitive answer and waded in with a comment. However, my comment, while declarative, was wrong.

Windows doesn’t defrag SSDs. Full stop. If it reports as an SSD it doesn’t get defragged, no matter what. This is just a no-op message. There’s no bug here, sorry. - Me in the Past

I dug deeper and talked to developers on the Windows storage team, and this post is written in conjunction with them to answer the question, once and for all:

"What's the deal with SSDs, Windows and Defrag, and more importantly, is Windows doing the RIGHT THING?"

It turns out that the answer is more nuanced than just yes or no, as is common with technical questions.

The short answer is, yes, Windows does sometimes defragment SSDs, yes, it's important to intelligently and appropriately defrag SSDs, and yes, Windows is smart about how it treats your SSD.

The long answer is this.

Actually Scott and Vadim are both wrong. Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage, this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

Wow, that's awesome and dense. Let's tease it apart a little.

When he says volume snapshots or "volsnap" he means the Volume Shadow Copy system in Windows. This is used and enabled by Windows System Restore when it takes a snapshot of your system and saves it so you can roll back to a previous system state. I used this just yesterday when I installed a bad driver. A bit of advanced info here: Defrag will only run on your SSD if volsnap is turned on, and volsnap is turned on by System Restore, as one needs the other. You could turn off System Restore if you want, but that turns off a pretty important safety net for Windows.

One developer added this comment, which I think is right on.

I think the major misconception is that most people have a very outdated model of disk\file layout, and how SSDs work.

First, yes, your SSD will get intelligently defragmented once a month. Fragmentation, while less of a performance problem on SSDs than on traditional hard drives, is still a problem. SSDs *do* get fragmented.

It's also worth pointing out that what we (old-timers) think about as "defrag.exe" as a UI is really "optimize your storage" now. It was defrag in the past and now it's a larger disk health automated system.

Additionally, there is a maximum level of fragmentation that the file system can handle. Fragmentation has long been considered primarily a performance issue with traditional hard drives. When a disk gets fragmented, a single file can exist in pieces in different locations on a physical drive. The physical drive then needs to seek around, collecting the pieces of the file, and that takes extra time.

This kind of fragmentation still happens on SSDs, even though their performance characteristics are very different. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful but absolutely needed.

SSDs also have the concept of TRIM. While TRIM (retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and the schedule is managed by the same UI from the user's perspective. TRIM is a way for SSDs to mark data blocks as not in use. Writing to empty blocks on an SSD is faster than writing to blocks in use, as those need to be erased before being written again. SSDs internally work very differently from traditional hard drives and don't usually know which sectors are in use and which are free space. Deleting something means marking it as not in use. TRIM lets the operating system notify the SSD that a page is no longer in use, and this hint gives the SSD more information, which results in fewer writes and, theoretically, a longer operating life.
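As a rough illustration of the queue-and-retrim behavior the storage developer described, here is a simplified sketch with hypothetical names; Windows' actual implementation lives inside NTFS and the Storage Optimizer and is far more involved.

# Simplified model: TRIM hints queue asynchronously, the queue is bounded,
# overflow is dropped, and a periodic retrim re-notifies the SSD of all free
# space. Names are hypothetical, for illustration only.
class TrimQueue:
    def __init__(self, max_pending=4):
        self.pending = []
        self.max_pending = max_pending
        self.free_ranges = set()             # what the file system knows is free

    def on_delete(self, lba_range):
        self.free_ranges.add(lba_range)
        if len(self.pending) < self.max_pending:
            self.pending.append(lba_range)
        # else: this hint is dropped -- harmless, the retrim sweep covers it

    def retrim(self, send_trim):
        for r in sorted(self.free_ranges):   # periodic Storage Optimizer sweep
            send_trim(r)
        self.pending.clear()

q = TrimQueue()
for block in range(10):                      # ten deletes, only four hints queued
    q.on_delete((block * 8, block * 8 + 7))
q.retrim(lambda r: print("TRIM", r))         # the sweep still covers all ten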

In the old days, you would sometimes be told by power users to run this at the command line to see if TRIM was enabled for your SSD. A zero result indicates it is.

fsutil behavior query DisableDeleteNotify

However, this stuff is handled by Windows today (as of 2014), and you can trust that it's "doing the right thing." Windows 7, along with 8 and 8.1, comes with appropriate and intelligent defaults, and you don't need to change them for optimal disk performance. This is also true of Server SKUs like Windows Server 2008 R2 and later.

Conclusion

No, Windows is not foolishly or blindly running a defrag on your SSD every night, and no, Windows defrag isn't shortening the life of your SSD unnecessarily. Modern SSDs don't work the same way that we are used to with traditional hard drives.

Yes, your SSD's file system sometimes needs a kind of defragmentation, and that's handled by Windows, monthly by default, when appropriate. The intent is to maximize performance and a long life. If you disable defragmentation completely, you are taking the risk that your file system metadata could reach maximum fragmentation and potentially get you into trouble.

Source: Hanselman
