We are moving full speed toward Utrecht at this point - register today and submit your talks!
The Storage Tiering capability has been refactored and will be released very soon with updated support for admin-defined policies governing access time, data movement, data replication, and data verification.
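As a rough illustration of how admin-defined tiering policies are expressed, the capability reads metadata attached to storage resources. The resource names below are hypothetical, and the exact attribute names and units should be checked against the released plugin's documentation; this is a sketch, not the definitive configuration.

```shell
# Hypothetical resources fast_resc and slow_resc, grouped into one tier group
# (the trailing number is the tier position within the group).
imeta set -R fast_resc irods::storage_tiering::group example_group 0
imeta set -R slow_resc irods::storage_tiering::group example_group 1

# Access-time policy: migrate objects unaccessed for 30 days (in seconds)
# off the fast tier.
imeta set -R fast_resc irods::storage_tiering::time 2592000

# Verification policy: confirm migrated replicas by checksum.
imeta set -R slow_resc irods::storage_tiering::verification checksum
```

These commands require a running iRODS Zone with the storage tiering plugin installed; they configure policy purely through metadata, with no rule-language changes needed.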
Thanks to the refinement and clarification of the storage tiering capability, early work on the Indexing capability is very promising. We had a skeleton framework up and running within 48 hours of beginning to code. We expect indexing and publishing to be part of the training at the User Group Meeting in Utrecht in June.
We spent a week testing and improving the Ceph RADOS resource plugin with a Docker Compose setup provided by Consortium Member Maastricht University. It now passes our test suite, albeit run manually.
With a goal of simplification and general-purpose usage, we are reconsidering NFSRODS's current requirement of Kerberos for authentication and authorization. We love removing code, so trusting the client-provided username could be quite rewarding.
We have demonstrated parallel builds and parallel testing of the iRODS core code via Docker. We have also demonstrated builds and tests of a single iRODS plugin via Docker. Adding parallel plugin builds and tests is the next step. After that, we only need to add topology testing (a multiple-server Zone) and federation testing (multiple Zones).
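The fan-out pattern behind parallel plugin builds can be sketched with a small driver that runs independent build/test commands concurrently and collects their exit codes. The commands below are placeholder stand-ins, not the actual iRODS Docker invocations.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one build/test command in a shell; return (cmd, exit code)."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return cmd, result.returncode

# Hypothetical stand-ins for per-plugin docker build/test commands.
commands = [
    "echo build plugin-a",
    "echo build plugin-b",
    "echo build plugin-c",
]

# Each plugin build is independent, so they can run in parallel workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run, commands))

# A CI gate would fail the run if any command returned nonzero.
all_passed = all(rc == 0 for rc in results.values())
```

In a real pipeline each command would be a `docker build` or `docker run` of a plugin's build-and-test image, with the same collect-and-gate logic at the end.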
Active Development Work
Python iRODS Client (PRC)
Cacheless and Detached S3
Access Protocols
iRODS Capability - Automated Ingest
Lustre Integration
Storage Tiering Capability Package
Metalnx packaging
Cloud Browser
Continuous Integration (CI)
Background Items
Python Rule Engine Plugin
CockroachDB Database Plugin
Multipart Transfer, v5 API
Indexing Capability
Swagger REST API