It was during this project that I thought, hey, why not document it using the methodologies I've refined over the past couple of projects.
Currently I am using the following to manage this project:
Redmine: Available from turnkeylinux.org ready out of the box; you configure it to fit your needs. It has built-in calendaring, Gantt charting, up-to-date emailing of newly changed documents to everyone in your group, and a variety of other options.
TreeSizeFree: Keeps a hierarchy of the project's files by date accessed.
Access Time and Billing: A free, built-in time-and-billing database for MS Access; easily configurable, with on-the-fly additions.
With these three apps as my issue, asset, and file management tools, it's easy to maintain everything: conversations, meetings, documentation, outlines, project scope-of-work planning, and the list goes on.
For the 100TB SAN requirement we shot out a lot of RFQs (Requests For Quote) to various vendors: Studio Network Solutions, Promise, Tiger, NexSan, Exasan, Starwind, Tiger Tech, FreeNAS, TrueNAS, SAM-SD, Quantum, Infortrend, B&H Photo, HP, Dell, Nexenta, DataCore, 45 Drives, Aberdeen, Merritek and a few others; almost 25 vendor RFQs in all.
Also, for this newly developed SAN to be useful, the client needed their network upgraded to 10G capabilities. We shot a few RFQs out to the following vendors: HP, Dell, Netgear, and various cabling installers to get feedback on the cheapest way to do it without sacrificing productivity.
So here goes: after several meetings, constant research, quotes, demos, trial software, and headaches from certain vendors taking forever to send their quotes, we finally narrowed it down to three vendors' products. These are the final solutions we went with:
Penguin Computing: For the complete SAN solution that included the following:
- Supermicro 36-bay Chassis (24 bays in front / 12 bays in back)
- 4U, 36x 3.5" Hot Swap SAS Bays
- 2x 2.5" Hot Swap SATA Bays w/ 2x 1280W Hot Swap PSUs
- Dual Intel Xeon E5-2620 v3 CPU (6C, 2.4GHz, 85W)
- 64GB RAM, DDR4-2133 ECC, 2R (8 x 8GB)
- RAID, LSI 9361-8i 12Gbps SAS w/ CacheVault: HW RAID 0/1/5/6/10/50/60
- 4 x Drive Tray, 3.5" to 2.5" Bay
- RAID 5 Volume: 2235 GB (4 x 800GB SSD, 2.5", SAS, 12Gbps, 3 DWPD, MLC)
- RAID 6 Volume: 40054 GB (9 x 6TB HDD, 3.5", SAS, 12Gbps, 7200RPM, Nearline)
- RAID 6 Volume: 40054 GB (9 x 6TB HDD, 3.5", SAS, 12Gbps, 7200RPM, Nearline)
- Integrated NIC: Intel I350, 2x RJ-45/GbE
- Integrated BMC: Aspeed AST2400, 1x RJ-45/100MbE (Dedicated IPMI)
- NIC: Intel X520-DA2, 2x SFP+/10GbE
- Standard Rails, 2x Power Cords for PDU with C13 outlet (C14/C13), 1m
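As a sanity check on those volume sizes, here is a quick sketch (illustrative Python, not part of the build) of the usable-capacity arithmetic: RAID 5 sacrifices one drive's worth of space to parity and RAID 6 sacrifices two, and the same volume can read quite differently depending on whether it's reported in decimal GB (drive vendors) or binary GiB (many controllers).

```python
# Sanity-check the usable capacity of the three RAID volumes quoted above.
# Drive sizes are decimal (vendor) bytes; RAID 5 loses one drive to parity,
# RAID 6 loses two. Illustrative only; real controllers add some overhead.

GB = 1000**3   # decimal gigabyte, as drive vendors advertise
GiB = 1024**3  # binary gibibyte, as many RAID BIOSes report

def usable_bytes(drives, drive_bytes, parity_drives):
    """Raw usable capacity: total drives minus parity drives."""
    return (drives - parity_drives) * drive_bytes

volumes = [
    ("RAID 5 SSD silo", 4, 800 * GB, 1),   # 4 x 800GB SSD, single parity
    ("RAID 6 HDD silo", 9, 6000 * GB, 2),  # 9 x 6TB HDD, double parity
    ("RAID 6 HDD silo", 9, 6000 * GB, 2),  # second, identical HDD silo
]

for name, n, size, parity in volumes:
    b = usable_bytes(n, size, parity)
    print(f"{name}: {b / GB:,.0f} GB decimal = {b / GiB:,.0f} GiB")
```

The SSD silo comes out to 2,400 GB decimal, which is 2,235 GiB, matching the 2235 figure quoted above, so that quote is effectively in binary units. Each HDD silo computes to 42,000 GB decimal / 39,116 GiB; the quoted 40054 sits between the two conventions, presumably because the controller reports in a mixed unit.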
IQSS: Offered everything Penguin had for a few dollars less; the two were comparable, so they stayed in the loop, and their warranty offering matched Penguin's as well.
Tiger Technology: For content management, workflow scheduling, transcoding, ingestion, replication and backup, and ease of use, we went with the TigerStore software and a TigerBox1 as the MDC (Metadata Controller).
The deciding factors that led us to this decision were:
- Current technology for a fraction of the cost (Other vendors were in the $80-100k range)
- Ability to expand in the future (Adding more drives or an expansion bay)
- Warranty offerings (Penguin gave a 5yr/next business day onsite for pennies)
- Act as an editing station on the first silo (RAID5 SSD)
- Provide redundancy and storage on the second silo (RAID6 SAS)
- Provide archival storage and stage files for a second location within the building and for offsite cloud backup to secure storage
The following vendors were then used to monitor, back up, and encrypt the data onsite and offsite:
Veeam: Backup and recovery of files, data, VMs, etc.
Spiceworks Network Monitor: Keeps track of all activity on the network, with alerts set up for text/email-based responses.
Spiceworks Asset Management: For monitoring all PCs, switches, routers, and other devices on the network that use IP-based transport.