Thanks so much for the more detailed explanation. With your help, I'm now getting really excited about a new computer! We will VERY soon be going to Tanzania and I sure hope I bring home lots of images to process! LOL! I'm hoping that by ordering the new PC before we leave, it will arrive soon after we return.

TEMPS is my generic name for all the temporary storage locations a PC uses for the OS and YOUR WORKFLOW.
They include the Windows page file and the TMP and TEMP environment variables.
I choose to point my Downloads library there too.
Plus all the cache/temp/work spaces used by your tools: DXO has its cache, PS has its "Scratch Disk", etc. Lots of tools have their own - you need to search their setups.
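A quick way to see where those global temp variables currently point is a few lines of Python (a minimal sketch; `T:\Temp` in the comments is a hypothetical fast-SSD path, not one from this thread):

```python
import os
import tempfile

# Show where the OS currently resolves temporary files.
# On Windows, tempfile honors the TMP and TEMP environment variables.
print("TMP  =", os.environ.get("TMP"))
print("TEMP =", os.environ.get("TEMP"))
print("resolved temp dir:", tempfile.gettempdir())

# Redirect for the current process only (T:\Temp is a hypothetical path):
# os.environ["TMP"] = os.environ["TEMP"] = r"T:\Temp"
# For a permanent, system-wide change, use System Properties > Environment
# Variables, or run `setx TMP T:\Temp` and `setx TEMP T:\Temp` at a prompt.
```

The per-application caches (DXO, PS scratch disk, etc.) ignore these variables, which is why each one has to be retargeted in its own settings.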
All these "scratch pad" locations get a lot of read/write traffic that adds to processing time if they are located on spinning media or even a SATA-connected SSD.
I drive those all to a single fast SSD, and if/when it dies from the abuse, I've lost nothing. That is my "TEMP" drive: a 500GB NVMe SSD - it was a 1TB SATA SSD in my previous machine.
After a shoot, I move the camera output to the "WORK" drive -- another 500GB NVMe drive for that purpose.
I then move the finished work off of that to the HDD mirrors when I'm done processing that photo shoot.
The "WORK" SSD is new in my workflow with this machine. I used to go straight from XQD to HDD then work from the final directory there. Using the intermediate SSD is another part of my planned speed-up during my workflow as all images are now coming and going to superfast storage.
I could have just put in one SSD for WORK and TEMP usage in this case. Was a coin flip.
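The final step of that workflow - moving a finished shoot off the WORK SSD to the HDD mirrors - can be sketched as a small script. This is a hedged illustration only: the function name, folder layout, and drive letters are made up, not from the post.

```python
import shutil
from pathlib import Path

def archive_shoot(work_dir: Path, archive_dir: Path, shoot_name: str) -> Path:
    """Move a finished shoot folder off the fast WORK drive to archive storage."""
    src = work_dir / shoot_name
    dest = archive_dir / shoot_name
    archive_dir.mkdir(parents=True, exist_ok=True)  # create archive root if needed
    shutil.move(str(src), str(dest))                # one move keeps the folder intact
    return dest

# Hypothetical usage with made-up drive letters:
# archive_shoot(Path(r"W:\Work"), Path(r"D:\Archive"), "2020-02_basketball")
```

Moving (rather than copying) keeps the WORK SSD lean, so only in-progress shoots occupy the fast storage.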
It will be interesting to see how long it lasts. When SSDs were relatively new, conventional wisdom warned against letting your scratch space and/or pagefile reside on one. Then a tech site helped dispel that notion by running a test where they basically pounded a group of drives into oblivion with continual writes to the point of failure. A couple of observations from the test: 1) the majority of the devices lasted far longer than anticipated, and 2) the idea that a drive which can no longer be written to will fall back to read-only mode is a myth - none of the failed drives could be read.
As I said - in my last machine I used a 1TB SATA SSD - a Samsung 850 EVO, I believe - and it ran as TEMP for everything for 6 years. It worked fine, but I don't know how healthy it actually is, as I've not looked at the SMART data for it (and it might be old enough not to support SMART).
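Reading the SMART health summary doesn't take much. Here is a sketch that shells out to the smartmontools CLI if it's installed; `smartctl` being present and the device path are both assumptions (on Windows, a GUI tool like CrystalDiskInfo shows the same data):

```python
import shutil
import subprocess

def smart_health(device: str) -> str:
    r"""Return smartctl's health summary, or a note if smartctl is absent.

    Requires smartmontools; `device` is e.g. /dev/sda on Linux or
    \\.\PhysicalDrive0 on Windows (both hypothetical examples).
    """
    exe = shutil.which("smartctl")
    if exe is None:
        return "smartctl not found - install smartmontools"
    result = subprocess.run([exe, "-H", device], capture_output=True, text=True)
    return result.stdout or result.stderr

# Hypothetical usage: print(smart_health("/dev/sda"))
```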
Based on Richard's wise advice, I use Macrium to make images of my C: drive. I also used Macrium to create a bootable CD/DVD. With guidance (cuz I don't know this stuff!) I modified the BIOS to ask me what to do if there was a bootable disc in my CD/DVD drive. I recently swapped out a 250 GB SSD for a new 500 GB SSD. I booted from the CD/DVD, pointed the software to my recent image, and used that image to populate the new larger SSD. Worked like a charm. If your C: drive fails, you've got some hoops to jump through beyond just restoring a backup (how do you restore the backup if your machine won't boot?).
Whenever I download software I always add the download file to a software library, together with a readme.txt file giving any serial numbers and advice to myself about installing it. That way it is easy to reinstall or add the software to another machine (subject to licensing).

HOWEVER, Puget will do a clean install of Windows as part of the build, so I will have to download each of my apps individually. At least I have all the serial numbers, and many are registered online, which will make it easier.
Previously, I would find Aurora choking when I was combining 5-7 raw D850 files for an HDR. All 6 cores would be working hard and the CPU would hover between 90-100% utilization! I'm hoping my new machine will do much better!

I was able to give my machine a proper workout the other night and was very pleased.
I shot a home HS Basketball game with my D500 which gave me a large batch of pictures to develop.
I needed PRIME for the usual gym-cave lighting, and since it was home court I could shoot from my favorite spots, which yield shots needing little cropping beyond applying a custom preset, so my touch-time per image was low. Thus I could crank through the images quickly and try to stack the develop queue as deep as possible. Then I would watch how long it took to flush the queue.
Not scientific, but gut feel is the only thing I have since the old machine is retired.
I ended up dumping 131 images to the queue.
In the past, the old [state of the art for its time] i9 would churn on the export queue, pegging all 8 cores/16 threads for well over an hour beyond the time I finished inputting edits.
I was thrilled with the results.
Although these were only 21MP raw files, the machine pretty much kept up with me - and I was going fast!
It finished flushing the queue just about 3 minutes behind me - fantastic!
The curious part was that the CPU load never went above ~35% on the 3950X, whereas it would hit 100% on the i9. I'll need to look at this more; maybe more of the code shifted from CPU to GPU, as I'm also comparing DXO-v2 to DXO-v3. Version 3 would not install on Win7, so I can't compare apples to apples.
Hi BK. Long time since I've been around here. If you don't mind me asking, what CPU did you decide to go for in the end? Could you post a list please?