File-Based Access
Revision as of 16:34, 28 February 2012

Part of the Data Access and Storage Facilities, a cluster of components within the system that focuses on standards-based, structured access to and storage of files of arbitrary size.

This document outlines their design rationale, key features, and high-level architecture, as well as the options for their deployment.


Overview

Remote access and storage of unstructured bytestreams, or files, can be provided through a standards-based, POSIX-like API which supports the organisation and operations normally associated with local file systems.

The API is provided by a set of components, most noticeably a client library and a service based on a range of site-local back-ends, including MongoDB and Terrastore. The library acts as a facade to the service and allows clients to download, upload, remove, add, and list files. It is also possible to remove the contents of a remote directory or to list the objects in a remote directory.

Files have owners, and owners may allow a range of users to download and share their files. Through the use of metadata, the library supports hierarchical organisation of the data on top of the flat storage provided by the service's back-ends.
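The idea of encoding folders as metadata over a flat store can be sketched as follows. This is a conceptual illustration only, not the actual gCube API; the function names and the `dir`/`owner` metadata fields are assumptions made for the example.

```python
# Conceptual sketch: a flat key-value store where each entry carries a
# "dir" metadata field, so directory listings are computed from metadata
# rather than from structure in the back-end itself.

flat_store = {}  # object id -> (payload, metadata)

def upload(object_id, payload, directory, owner):
    """Store a file in the flat back-end, recording its logical folder as metadata."""
    flat_store[object_id] = (payload, {"dir": directory, "owner": owner})

def list_dir(directory):
    """List objects whose metadata places them in the given logical folder."""
    return sorted(oid for oid, (_, meta) in flat_store.items()
                  if meta["dir"] == directory)

def remove_dir(directory):
    """Remove every object filed under the given logical folder."""
    for oid in list_dir(directory):
        del flat_store[oid]

upload("report.pdf", b"...", "/projects/a", "alice")
upload("data.csv", b"...", "/projects/a", "alice")
upload("notes.txt", b"...", "/projects/b", "bob")
print(list_dir("/projects/a"))  # ['data.csv', 'report.pdf']
```

Because the hierarchy lives entirely in metadata, a flat back-end such as MongoDB or Terrastore needs no notion of directories of its own.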

Key features

The subsystem comprises the following components:

structured file storage
Clients can create folder hierarchies, where folders are encoded as file metadata and do not require direct support in the storage back-end.
secure file storage
File access is authenticated against access rights set by file owners, including private, group, and public access rights;
scalable file storage
Files are stored in chunks, and chunks are distributed across clusters of servers based on the workload of individual servers;
fault-tolerant file storage
Files are asynchronously replicated across the servers of a cluster for data recovery and redundancy.
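A minimal sketch of the owner-set access rights mentioned under "secure file storage" is given below. The exact gCube semantics are not specified here, so the rules are assumptions for illustration: private grants access only to the owner, shared grants access to members of the owner's group, and public grants read access to everyone while writes remain owner-only.

```python
# Conceptual sketch of owner-set access rights (rules are illustrative
# assumptions, not the documented gCube semantics):
#   private -> read/write only for the owner
#   shared  -> read/write for members of the owner's group
#   public  -> read for everyone, write only for the owner

def can_access(user, group, file_meta, operation):
    """Decide whether `user` (member of `group`) may perform `operation` on a file."""
    owner = file_meta["owner"]
    owner_group = file_meta["group"]
    rights = file_meta["rights"]
    if user == owner:
        return True                     # owners always have full access
    if rights == "shared":
        return group == owner_group     # group members get read/write
    if rights == "public":
        return operation == "read"      # everyone else may only read
    return False                        # private: owner only

meta = {"owner": "alice", "group": "dev", "rights": "public"}
print(can_access("bob", "qa", meta, "read"))   # True
print(can_access("bob", "qa", meta, "write"))  # False
```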

Design

Philosophy

The main goal of this library is to let clients navigate through folders on a remote storage system, with the ability to download and upload files, while masking the back-end system. The library is designed to preserve a unified interface that aligns with the generality of these operations and encapsulates clients from the variety of File Storage Service back-ends. Its two layers, a core library and a wrapper library, allow it to be used either in standalone mode or within the gCube framework.


Architecture

The library is divided into two layers: a core library and a wrapper library. The core library is intended for generic use, external to the gCube framework; the wrapper library is designed for use within the gCube framework. The interaction between these two levels permits the use of the library within the gCube framework: the wrapper library interacts with the IS to discover the server resources that will be used by the core library, and the core library interacts with a File Storage Service back-end. The File Storage Service has the responsibility of storing the data. At this time, two kinds of file storage service are supported: Terrastore and MongoDB.


File based access is provided by the following components:


Core library
Implements a high-level facade to the remote APIs of the File Storage Service. The core library dialogues directly with a File Storage Service, which is responsible for storing data. This layer has the responsibility of splitting files into chunks when their size exceeds a certain threshold and of building the metadata, such as the owner, the type of object (file or directory), and the access permissions. It also has the task of issuing commands to the File Storage Service for the construction of the tree of folders from metadata, where needed.
Wrapper library
A wrapper library for the gCube framework. This library has the task of capturing the resources made available in the gCube framework and passing them to the core library.
File Storage Service
A service that has the responsibility of remote data storage. It is invoked by the core library and can be based on different technologies, such as MongoDB and Terrastore.
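The chunk-splitting step performed by the core library can be sketched as follows. The threshold and chunk size are illustrative values chosen for the example, not gCube's actual defaults.

```python
# Conceptual sketch of the core library's chunking step: payloads above a
# threshold are split into fixed-size chunks that the back-end can then
# distribute across servers. Sizes here are illustrative, not gCube defaults.

CHUNK_SIZE = 4  # bytes; real deployments would use a much larger value

def split_into_chunks(payload, chunk_size=CHUNK_SIZE):
    """Return the payload as a list of chunks; small payloads stay whole."""
    if len(payload) <= chunk_size:
        return [payload]
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

chunks = split_into_chunks(b"abcdefghij")
print(chunks)             # [b'abcd', b'efgh', b'ij']
print(b"".join(chunks))   # b'abcdefghij'  (chunks reassemble losslessly)
```

Reassembly is a simple concatenation, so the service only needs to track chunk order in the file's metadata.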


The following diagram illustrates the dependencies between the components of the subsystem:


File Access Architecture

Deployment

The deployment of this library focuses on installing the File Storage System. The File Storage System is installed statically, without dynamic scaling based on the request load. It is therefore very important to choose the correct installation according to the expected needs. Increasing the number of servers dedicated to data storage not only increases the storage capacity but also improves the balancing of the data and therefore the response time. On the other hand, if the storage requirements are small and the number of servers is large, resources will be wasted on servers that are little used.


Large Deployment

A large deployment consists of the installation of a cluster of servers dedicated to storage. Our current implementation uses a MongoDB File Storage Service. The servers are organized into MongoDB shards: each shard consists of one or more servers and stores data using mongod processes (mongod being the core MongoDB database process). In a production situation, each shard will consist of multiple servers to ensure availability and automated failover. The set of servers/mongod processes within a shard comprises a replica set.

In MongoDB, sharding is the tool for scaling a system, while replication is the tool for data safety, high availability, and disaster recovery. The two work in tandem yet are orthogonal concepts in the design.


Large Deployment Architecture

Small Deployment

A small deployment consists