Saturday, December 22, 2012

SSD optimisation part 4: atime and journaling

Noatime


The reasoning behind recommending the noatime mount option for SSDs is that every time a file is accessed, its last-accessed time is written to disk, so turning this off removes a lot of writes. Most distros use relatime by default, which already negates some of those writes, so it is not an absolute must:

"Relatime maintains atime data, but not for each time that a file is accessed. With this option enabled, atime data is written to the disk only if the file has been modified since the atime data was last updated (mtime), or if the file was last accessed more than a certain length of time ago (by default, one day)."
- Red Hat documentation

If you want to use it, add noatime to /etc/fstab:
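For example, a root-partition entry could end up looking like this (the UUID and the other mount options here are placeholders; keep whatever your entry already has and just add noatime):

    UUID=<your-uuid>  /  ext4  defaults,noatime,errors=remount-ro  0  1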



Disable Journaling


"A journaling file system is a file system that keeps track of the changes that will be made in a journal (usually a circular log in a dedicated area of the file system) before committing them to the main file system. In the event of a system crash or power failure, such file systems are quicker to bring back online and less likely to become corrupted."
-Wikipedia

According to kernel developer and ext4 maintainer Ted Ts'o, there is very little to be gained by disabling journaling. On top of that, there is also the risk of data loss. I would highly recommend you leave it on. I won't even write how to turn it off...

Cons
Risk of data loss.

Pros
A tiny bit of performance gain?

My recommendation
Keep it at the default setting (on).

This post was written for the Linux Mint forum. Please ask any questions there (:

SSD optimisation part 3: Enabling TRIM

"TRIM was introduced soon after SSDs started to become an affordable alternative to traditional hard disks. Because low-level operation of SSDs differs significantly from mechanical hard disks, the typical way in which operating systems handle operations like deletes and formats (not explicitly communicating the involved sectors/pages to the underlying storage medium) resulted in unanticipated progressive performance degradation of write operations on SSDs. TRIM enables the SSD to handle garbage collection overhead, which would otherwise significantly slow down future write operations to the involved blocks, in advance."
- Wikipedia
As you can see, TRIM is a very handy feature, but not all SSDs support it. Please refer to your SSD's documentation to check whether it supports TRIM, or try this method posted by Linux Mint Forum Moderator Vincent Vermeulen:

"Next up, you need to find out if your SSD supports TRIM. Just copy & paste the following command string to the Terminal and execute it. When asked type your password (nothing will seem to happen as you type, this is normal):

 

Did it list your SSD? Then it supports TRIM. If it didn't show anything, your SSD doesn't support TRIM."
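One commonly used way to run such a check, assuming the hdparm utility is installed and your SSD is /dev/sda, is:

    sudo hdparm -I /dev/sda | grep TRIM

On a TRIM-capable drive this prints a line like "Data Set Management TRIM supported"; no output means the drive doesn't advertise TRIM.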

Cons
If your drive supports it, none. If it doesn't, you might end up with a read-only drive or even data corruption.

Pros
A longer lifetime and a faster drive.

My recommendation
Enable it if your drive supports it.

"discard  - Controls whether ext4 should issue discard/TRIM commands to the underlying block device when blocks are freed."
- Ext4 documentation
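In other words, if your drive supports TRIM you can enable it by adding discard to the mount options of each ext4 partition in /etc/fstab, along the lines of this sketch (the UUID and the other options are placeholders):

    UUID=<your-uuid>  /  ext4  defaults,discard,errors=remount-ro  0  1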



This post was written for the Linux Mint forum. Please ask any questions there (:

SSD optimisation part 2: Large commit interval (Ext4)

"Ext4 can be told to sync all its data and metadata every 'nrsec' seconds. The default value is 5 seconds. This means that if you lose your power, you will lose as much as the latest 5 seconds of work (your filesystem will not be damaged though, thanks to the journaling).  This default value (or any low value) will hurt performance, but it's good for data-safety. Setting it to 0 will have the same effect as leaving it at the default (5 seconds). Setting it to very large values will improve performance."
- Ext4 documentation
Let's look at a simplified example:

A 2MB file is created, used for one minute, and then deleted (a photo in an online photo editor ending up in the browser cache, perhaps?). During that minute it is changed twice. If for simplicity's sake we assume the whole file is rewritten each time, that is a maximum of 6MB written to the SSD. With the default commit interval of 5 seconds, all 6MB would be written to disk in 3 writes of 2MB each. With an interval of, say, 2 minutes, nothing would be written at all, as everything stays in the cache. That is 3 writes / 6MB vs. 0 writes / 0MB. Of course this assumes the file is not explicitly saved by the application or flushed by some other operation that calls fsync, but I did write "simplified example"...

Cons
If your system crashes with a commit interval of 600 (i.e. 10 minutes), you might lose all the changes you have made in the last 10 minutes. Most of the time this won't happen, though, as software can still call fsync() and get its data written to disk, overriding the commit setting. You could look at the setting as "write everything to disk at least this often".

Pros
Fewer commits to disk means fewer writes and better performance.

My recommendation
The default (5) is a good all-round value. If you never have crashes or power outages, you could go as high as you feel like (years, even, but then you had better not use software that never calls fsync). If you really want me to throw a number out there, how about 200 seconds, like so:
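That is, with commit=200 added to the mount options of the relevant entry in /etc/fstab (the UUID and the other options here are just placeholders; keep whatever your entry already has):

    UUID=<your-uuid>  /  ext4  defaults,commit=200,errors=remount-ro  0  1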




This post was written for the Linux Mint forum. Please ask any questions there (:

SSD optimisation part 1: Temporary File System (tmpfs)


tmpfs is a file system that stores all files in virtual memory. Everything in tmpfs is temporary in the sense that no files are created on the hard drive; everything resides completely in RAM. This can be very beneficial in some cases. Having files in RAM is a lot faster than having them on disk, and no writes are made to the SSD at all, saving a lot of wear and tear. Most guides will recommend you make use of tmpfs, and it does make sense. There are some cons to consider though, such as everything being lost at reboot. That is a good thing if you use tmpfs for /tmp, as this is the way it should work, but for other uses it could cause problems.

Cons
Sometimes /tmp is used as a working directory for some pretty big files, for example by backup software or DVD-authoring tools. If your tmpfs /tmp uses the default settings (50% of your RAM), then to burn a DVD (4.7GB) with such tools you need at least twice that amount of RAM available. That is a lot of RAM, but if you have it, great! If not, maybe you can change the software's working directory or mount a normal /tmp partition.

Debian developers tried to use tmpfs for /tmp by default, but this caused some bugs and problems. Here are a few of the complaints from a quick Google search: one, two, three, four.

I did a quick test to see what would happen on my system if I ran out of /tmp space, using a simple copy to /tmp. The copy operation exited with an error and strange things started to happen; for example, TAB completion in the terminal stopped working and gave a disk-full error. If the system ends up with no RAM left, it could kill or crash applications, lose the files in /tmp (which might still be needed until reboot) or even take down the whole system. This risk of course rises as less RAM is available, such as when some of it is used to host a tmpfs /tmp.

Pros
No files on the SSD means no wear at all. RAM is also a lot faster than even the fastest disk drive. Files in /tmp are meant to be deleted anyway, hence the name, so why not keep them away from the drive completely if possible?

My recommendation
Only use it if you know you have enough RAM. Limit it in size (for example "size=2G" or "size=50%") according to your total available RAM, and keep an eye on it in the beginning ("df -h /tmp"). You might also want to look into "ramfs". If you point other directories to a tmpfs /tmp (like the browser cache), be careful that the correct permissions are set.
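A minimal /etc/fstab entry for a size-limited tmpfs /tmp could look like this (2G is just an example cap; mode=1777 keeps the usual world-writable, sticky-bit permissions /tmp needs):

    tmpfs  /tmp  tmpfs  defaults,noatime,size=2G,mode=1777  0  0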





Hint
If you want the Firefox cache in RAM but don't want to use tmpfs, you could do this instead, according to Mozillazine.org:
  • Open up about:config in Firefox
  • Set browser.cache.disk.enable to "false"
  • Set browser.cache.memory.enable to "true"
  • Set browser.cache.memory.max_entry_size to the number of KB you want to use, or -1 for automatic cache size



In Opera you change the cache directory like so:
  • Open opera:config
  • Search for "Cache Directory4"
  • Change it to wherever you like

This post was written for the Linux Mint forum. Please ask any questions there (:

Thursday, June 10, 2010

Got a log full of "ACPI Error Method parse/execution failed" errors?

Well, I certainly did!



Actually, I have seen the error on two of my machines, and there are a lot of posts about it on the net. Here is how I got rid of the error spam:


First, have a look at which clock sources are available on your computer, as that seems to be where the problem lies:
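On any reasonably modern kernel the available clock sources are exposed through sysfs:

    cat /sys/devices/system/clocksource/clocksource0/available_clocksource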


I had hpet, acpi_pm, and tsc on one machine and only hpet and acpi_pm on the other. To stop the messages in the logs, an option needs to be added to the kernel command line. First try it out, and only then add it permanently. To try it out, reboot, and when you see the list of kernels press "e" to edit the entry you want to boot with. Add clocksource=acpi_pm or one of the other options you saw earlier.

If everything works out and the error is gone then add it permanently:
Edit "/etc/default/grub" and add the kernel option you tested, so that it applies to all kernels:
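For example (the "quiet splash" part is just the common default; your existing options may differ):

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash clocksource=acpi_pm"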


The reason for doing it this way, rather than editing a single boot entry, is that otherwise you would have to re-add the option after every kernel update.

Save and exit. Then update GRUB:
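On Debian-based systems such as Linux Mint that would be:

    sudo update-grub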




Hope it helps someone get their logs back to normal ;-)