This opportunity closes on 01/22/2026 at 1:30 pm PST

Tier 3 Research Data Storage

Description

Oregon State University intends to contract for the provision of Tier 3 Research Data Storage from Croit North American Inc as a sole source because of the uniqueness (design, capability, nature) of the goods and/or services. Croit was the only supplier whose Ceph management platform was found to meet the following minimum requirements:

**Technical Requirements:**
- GUI-based Ceph cluster deployment and lifecycle management capable of managing 40+ petabytes per administrator
- Automated PXE boot infrastructure for diskless server deployment across 32+ Supermicro SSG-641E-E1CR36L nodes (4U chassis, 30 HDD bays, 6 NVMe slots)
- Web-based administration interface with integrated shell access to all cluster nodes, eliminating need for separate SSH key management across 4 geographically distributed datacenters
- Automated OS (RHEL9-compatible: Rocky/Alma Linux) and Ceph security updates with cluster-wide update management
- GUI-based CRUSH map management for complex multi-datacenter erasure coding configurations (8+4 across 4 DCs with 3 chunks per DC)
- Centralized logging and monitoring with pre-built Grafana dashboards for proactive health monitoring
- High-availability gateway management for NFS, SMB, S3, and iSCSI protocols with keepalived/CTDB integration
- RESTful API and hook scripts for automation and integration with OSU's IAM (Active Directory/Grouper)
- 100% pure open-source Ceph distribution (no proprietary forks) to maintain upgrade path flexibility and avoid vendor lock-in
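As a sanity check on the erasure-coding layout named above (an illustrative sketch using only the figures stated in the requirement, not a configuration supplied by OSU or Croit), the 8+4 profile spread across 4 datacenters works out as follows:

```python
# Arithmetic behind the "8+4 across 4 DCs with 3 chunks per DC" requirement.
k, m = 8, 4                      # data chunks, coding chunks
datacenters = 4
chunks_per_dc = (k + m) // datacenters

# 12 chunks divide evenly into 3 per datacenter.
assert chunks_per_dc * datacenters == k + m

# Raw-to-usable overhead: 12 chunks stored for every 8 chunks of data.
overhead = (k + m) / k           # 1.5x

# Losing one entire DC removes 3 chunks; 9 remain, and any k=8 chunks
# suffice to reconstruct, so a full-datacenter outage is survivable.
surviving = (k + m) - chunks_per_dc
assert surviving >= k

print(f"chunks per DC: {chunks_per_dc}, overhead: {overhead}x, "
      f"chunks surviving a DC loss: {surviving}")
# → chunks per DC: 3, overhead: 1.5x, chunks surviving a DC loss: 9
```

This arithmetic is why the failure-domain placement (3 chunks per DC) matters: with 4 or more chunks in any one datacenter, a single-DC outage could drop the surviving count below k=8.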

**Operational Requirements:**
- 24x7 emergency support with documented response times (4-hour initial response for Severity 1 incidents)
- Expertise in multi-datacenter Ceph deployments with erasure coding failure domain architecture
- Remote support capability via VPN/SSH with German/European time zone coverage for overnight incidents
- Hardware-agnostic support (vendor must support commodity Supermicro hardware configurations)
- Onboarding services including initial cluster design validation, deployment assistance, and staff training
- Documented upgrade path from standard open-source Ceph (if initially deployed with 42on emergency support)

**Compliance and Security Requirements:**
- Support for NIST 800-171 technical controls implementation (encryption at rest via Ceph S3, audit logging, per-user access controls)
- VLAN-based network segmentation support for separating client, storage, and management traffic
- Audit logging capabilities for compliance documentation

**Compatibility Requirements:**
- Must support Supermicro SSG-641E-E1CR36L storage nodes with Broadcom 3808 onboard HBA, 24TB Seagate Exos HDDs, and 7.68TB Micron 7450 Pro NVMe drives
- Must integrate with Weka.io v5.0 high-performance caching layer via Ceph S3 backend tiering (Cluster B requires dedicated Ceph cluster as Weka backend)
- Must support NVIDIA ConnectX-6 Dx 25GbE networking infrastructure
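Combining the hardware and erasure-coding figures above gives a rough capacity envelope for the minimum build (a back-of-envelope sketch assuming all 30 HDD bays are populated in all 32 nodes; actual usable space will be lower after Ceph metadata, rebalancing headroom, and fill-ratio reserves):

```python
# Rough raw/usable capacity estimate for the minimum configuration:
# 32 nodes x 30 HDD bays x 24 TB Seagate Exos drives, stored under
# the 8+4 erasure-coding profile (1.5x overhead).
nodes, bays, drive_tb = 32, 30, 24
k, m = 8, 4

raw_tb = nodes * bays * drive_tb        # 23,040 TB raw
usable_tb = raw_tb * k / (k + m)        # 15,360 TB before Ceph overhead

print(f"raw: {raw_tb / 1000:.1f} PB, usable (8+4 EC): {usable_tb / 1000:.1f} PB")
# → raw: 23.0 PB, usable (8+4 EC): 15.4 PB
```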

An entity may appeal this determination in accordance with OSU Standard 03-010, Section 5.17, no later than the closing date indicated on the website. Appeals must be submitted to Procurement, Contracts and Materials Management at procurement@oregonstate.edu by Thursday, January 22, 2026 at 1:30 pm PST. For additional information, please contact Brian Kinsey by email at brian.kinsey@oregonstate.edu or by telephone at (541) 737-1027.

Attachments

  • No Attachments