The '''gCube Document Library (gDL)''' is a client library for adding, updating, deleting and retrieving document descriptions to, in, and from remote collections in a gCube infrastructure.

The gDL is a high-level component of the subsystem of [[GCube_Information_Organisation_Services_(NEW)|gCube Information Services]] and it interacts with lower-level components of the subsystem to support document management processes within the infrastructure:

* the [[GCube_Document_Model#Overview|gCube Document Model]] (gDM) defines the basic notion of document and the [[GCube_Document_Model#Implementation|gCube Model Library]] (gML) implements that notion into objects;
* the objects of the gML can be exchanged in the infrastructure as edge-labelled trees, and the [[Content_Manager_Library|Content Manager Library]] (CML) can dispatch them to the read and write operations of the [[Content_Manager_(NEW)|Content Manager]] (CM) service;
* the CM implements its operations by translating trees to and from the content models of diverse repository back-ends.
 
The gDL builds on the gML and the CML to implement a local interface of <code>CRUD</code> operations that lift those of the CM to the domain of documents, efficiently and effectively.

= Preliminaries =

The core functionality of the gDL lies in its operations to read and write document descriptions. The operations trigger interactions with the [[Content_Manager_(NEW)|Content Manager]] service and the movement of potentially large volumes of data across the infrastructure. This may have a non-trivial and combined impact on the responsiveness of clients and the overall load of the infrastructure. The operations have been designed to minimise this impact. In particular:

* when reading, clients can qualify the documents that are relevant to their queries, and indeed what properties of those documents should be actually retrieved. These retrieval directives are captured in the gDL by the notion of [[#Projections|document projections]].
 
* when reading and writing, clients can move large numbers of documents across the infrastructure. The gDL ''streams'' these I/O movements so as to make efficient use of local and remote resources. It then defines facilities with which clients can conveniently consume input streams, produce output streams, and more generally convert one stream into another regardless of its origin. These facilities are collected into the [[#Streams|stream DSL]], an Embedded Domain-Specific Language (EDSL) for stream conversion and processing.

Understanding document projections and the stream DSL is key to reading and writing documents effectively with the gDL. We discuss these preliminary concepts first, and then consider their use as inputs and outputs of the read and write operations of the library.

== Projections ==

A projection is a set of constraints over the properties of document descriptions. It can be used in the [[#Reading Documents|read operations]] of the gDL to:

* characterise relevant descriptions as those that ''match'' the constraints (''projections as types'');
* specify what properties of relevant descriptions should be retrieved (''projections as retrieval directives'').

Constraints accordingly take two forms:

* '''include constraints''' apply to properties that must be matched ''and'' retrieved;
* '''filter constraints''' apply to properties that must be matched but ''not'' retrieved.

'''note''': in both cases, the constraints take the form of ''predicates'' of the [[Content_Manager_Library|Content Manager Library]] (CML). The projection itself converts into a complex predicate which is amenable for processing by the Content Manager service in the execution of its retrieval operations. In this sense, projections are a key part of the document-oriented layer that the gDL defines over lower-level components of the gCube [[GCube_Information_Organisation_Services_(NEW)|subsystem dedicated]] to content management.

As a first example, a projection may specify an include constraint over the name of metadata elements and a filter constraint over the time of last update. It may then be used to:

* characterise document descriptions with at least one metadata element that matches both constraints;
* retrieve of those descriptions only the name of matching metadata elements, excluding the time of last update, any other metadata property, and any other document property, including other inner elements and their properties.
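
For illustration, such a projection could be built with the API introduced [[#Simple Projections|below]] (a sketch only; <code>LAST_UPDATE</code> is an assumed name for the property constant of the time of last update, and the actual constant may differ):

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
//include constraint on the element name, filter constraint on the time of last update
MetadataProjection mp = metadata().with(NAME).where(LAST_UPDATE);
</source>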
  
 
Projections have the <code>Projection</code> interface, which can be used to access their constraints in element-generic computations. To build projections, however, clients deal with one of the following implementations of the interface: <code>DocumentProjection</code>, <code>MetadataProjection</code>, <code>AnnotationProjection</code>, <code>PartProjection</code>, and <code>AlternativeProjection</code>, one for each type of element of the gDM. A further implementation, <code>PropertyProjection</code>, allows clients to express constraints on the generic properties of documents and their inner elements.

=== Simple Projections ===

[[GDL_Projections_(2.0)|read more...]]

Clients create projections with the factory methods of the <code>Projections</code> companion class. A static import improves legibility and is recommended:

<source lang="java5" highlight="1">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document();

MetadataProjection mp = metadata();

AnnotationProjection annp = annotation();

PartProjection pp = part();

AlternativeProjection altp = alternative();
</source>
 
The projections above do not specify any include constraint or filter constraints on the elements of the corresponding type. For example, <code>dp</code> matches all documents, regardless of their properties, inner elements, and properties of their inner elements. Similarly, <code>mp</code> matches all metadata elements of any document, regardless of their properties, and <code>pp</code> matches all the parts of any document, regardless of their properties. In this sense, the factory methods of the <code>Projections</code> class return ''empty projections''.

Clients may add include constraints to a projection with the method <code>with()</code> of all projection classes. For document projections, for example:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(NAME);
</source>

With the above, the client adds the simplest form of constraint, an ''existence constraint'' that requires matching documents to have given properties, here only a name. Since this is an include constraint, the client is expressing an interest only in this property, regardless of the existence and values of other properties. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection is translated into a directive to retrieve ''only'' the names of document(s) which have one.

'''note''': properties are conveniently represented by constants in the <code>Projections</code> class. The constants are not strings, however, but dedicated <code>Property</code> objects specific to the type of projection. Trying to use properties that are undefined for the type of elements targeted by the projection is illegal and the error is detected statically.

Note that existence constraints may be expressed at once on multiple properties, e.g.:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(NAME,LANGUAGE,BYTESTREAM);
</source>

Besides inclusion constraints, clients may specify filter constraints with the method <code>where()</code> of all projection classes, e.g.:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().where(NAME);
</source>

Now the client still requires documents to have a name, but he retains an interest in the other properties of matching documents. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection is translated into a directive to retrieve ''all'' the properties of documents with a name.

Include and filter constraints can be combined, and the projection classes follow a builder pattern to add readability to the combinations. In particular, <code>with()</code> and <code>where()</code> return the very projection on which they are invoked. They may then be used as follows:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(NAME,LANGUAGE)
                                  .where(BYTESTREAM);
</source>

Here, the client requires documents to have a name, a language, and to embed a bytestream. However, he has an interest in processing only document names and languages (e.g. for display purposes). Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection retrieves the requested information but avoids the transmission of bytestreams.

'''note''': repeating the same property in <code>with()</code> and <code>where()</code> clauses, or else across the clauses, has a destructive effect, in that the last constraint overrides the previous ones. This allows clients to stage the construction of a projection across multiple components, where a component may wish to override the constraints set by an upstream component. Clients should be careful to avoid this repetition in other cases.
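
For instance (a sketch of the override semantics just described):

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
//the later where(NAME) overrides the earlier with(NAME): the projection
//now filters by name but no longer directs its retrieval
DocumentProjection dp = document().with(NAME).where(NAME);
</source>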

=== Optional Modifiers ===

Another common requirement is to indicate the optionality of properties. Clients may wish to include or filter by certain properties only if the properties actually exist. In this case, clients can use the <code>opt()</code> method of the <code>Projections</code> class as a constraint ''modifier''. The following example illustrates the point:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(NAME,opt(LANGUAGE))
                                  .where(BYTESTREAM);
</source>

This projection differs from the previous one only for the optional modifier on (the existence of) a language. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection retrieves the name of all documents that include a bytestream, but also their language ''if'' they do have one.

A common use of optional modifiers is with bytestreams, which clients may wish either to find included in the document or else referred to from the document:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(opt(BYTESTREAM),opt(URL));
</source>

Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection retrieves at most the bytestream and its URL for those documents that have both, only one of the two if the other is missing, and nothing at all if they are both missing.

'''note:''' the API allows optional modifiers in filter constraints too, but their application is rather pointless in this context (they will never exclude elements from retrieval).

=== Exclusion Constraints ===

Clients may wish to retrieve all the properties of documents ''but'' certain ones. As a common case, clients may wish to exclude embedded bytestreams but otherwise retrieve any other property a document may have. To achieve this, clients may use include constraints with optional modifiers on all the properties they wish to retain, e.g.:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(opt(NAME),opt(LANGUAGE),....); //but no bytestream
</source>

Clearly, the solution is cumbersome and will break if and when the model is extended. To improve matters, clients may use the method <code>without()</code>, which does precisely what clients would otherwise need to do explicitly and manually, e.g.:

<source lang="java5" highlight="3">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().without(BYTESTREAM);
</source>

'''note''': as usual, many properties can be excluded at once simply by listing them as parameters of <code>without()</code>.

'''note''': despite appearances, clients should not read the previous projection as characterising documents that do not have a bytestream. They should read it as an empty projection with exclusion directives.

Finally, note that care should be applied when chaining <code>without()</code> with <code>with()</code> and <code>where()</code> methods. This is because <code>without()</code> adds optional constraints implicitly on all properties that are not explicitly excluded from retrieval. Further inclusion constraints or filter constraints should be specified ''after'' invoking <code>without()</code>, or they will be overwritten by its optional constraints. On the other hand, combinations are possible and useful. For example, consider the following projection:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().without(BYTESTREAM).where(NAME);
</source>

Used with the [[#Reading_Documents|read operations]] of the gDL, this projection is translated into a directive to retrieve all the properties but the bytestream of all the documents that do have a name. However, the following projection:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().where(NAME).without(BYTESTREAM);
</source>

has a completely different and undesired effect. When it is used with the [[#Reading_Documents|read operations]] of the gDL, the projection is translated into a directive to retrieve all the properties but the bytestream of all the documents, whether they do or do not have a name. This is because the optional constraint on name implicitly specified with <code>without()</code> overwrites the filter constraint specified with <code>where()</code>.

=== Deep Projections ===

In the examples above, we have considered existence constraints on simple element properties. The examples generalise easily to repeated and structured properties, such as generic properties for all elements and inner element properties for documents.

Consider the following example:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
DocumentProjection dp = document().with(PART, opt(METADATA), PROPERTY);
</source>

Here the client adds three include constraints to the projection, all three for the existence of repeated properties. Documents that match this projection have ''at least'' one part, ''at least'' one generic property, and zero or more metadata elements. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection retrieves ''all'' the parts and ''all'' the generic properties of documents that have at least one of each, as well as ''all'' of their metadata elements if they happen to have some.

Repeated properties such as generic properties and inner elements are also structured, i.e. have properties of their own. Clients that wish to constrain those properties too can use ''deep projections'', i.e. embed within the projection of a given type one or more projections built for the structured properties of elements of that type. The following example illustrates the concept for metadata elements:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
MetadataProjection mp = metadata().with(LANGUAGE).where(BYTESTREAM);

DocumentProjection dp = document().with(NAME, PART)
                                  .with(METADATA,mp);
</source>

The first projection constrains the existence of language and bytestream for metadata elements. The second projection constrains the existence of name and parts for documents, as well as the existence of metadata elements that match the constraints of the first projection. The usual implications of include constraints and filter constraints apply. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, this projection retrieves the name, parts, and metadata elements of documents that have a name, at least one part, and at least one metadata element that includes a bytestream. For the metadata elements, in particular, it retrieves only the language property.

Note that optionality constraints apply to deep projections just as they apply to flat projections, as the following example shows:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
MetadataProjection mp = metadata().with(LANGUAGE).where(BYTESTREAM);

DocumentProjection dp = document().with(NAME, PART)
                                  .with(opt(METADATA,mp));
</source>

This projection differs from the previous one only because the existence of metadata elements that match the inner projection is optional. Documents that have a name and at least one part match the outer projection even if they have ''no'' metadata elements that match the inner projection (or no metadata elements at all).

=== Projections over Generic Properties ===

Generic properties are repeated and structured properties common to all elements. As for other properties with these characteristics, clients may wish to build deep projections that constrain their inner properties. For this purpose, the class <code>Projections</code> includes a dedicated factory method <code>property()</code>, as well as specialised methods to express constraints. The following example illustrates the approach:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
...
PropertyProjection pp = property().withKey("somekey").with(PROPERTY_TYPE);

DocumentProjection dp = document().with(NAME, PART)
                                  .with(PROPERTY,pp);
</source>

Here, the client creates a document projection and embeds in it an inner projection that constrains its generic properties. The inner projection uses the method <code>with()</code> to add an include constraint for the existence of a type for the generic property, as usual. It also adds an include constraint to specify an exact value for the key of a generic property of interest. This relies on a method <code>withKey()</code> which is specific to projections over generic properties of elements. The reason for this specific construct is that, differently from other constrainable properties of elements, the key of a generic property serves as its identifier.

For the rest, property projections behave like other projections (e.g. can be used with optional modifiers). Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, the projection above matches documents with a name, at least one part, and a property with key <code>somekey</code> and some type.

=== Advanced Projections ===

In more advanced forms of projections, clients may wish to specify constraints on properties other than mere existence. In these cases, they can use overloads of <code>with()</code> and <code>where()</code> that take as parameters <code>Predicate</code>s that capture the desired constraints. As mentioned above, predicates are defined in the [[Content_Manager_Library|CML]] and gDL clients need to become acquainted with the range of available predicates and how to [[Content_Manager_(NEW)#Building_Predicates|build them]].

'''note''': deep projections already make use of this customisability. When clients embed a projection into another, they constrain the corresponding structured property with the predicate into which the inner projection translates.

Commonly, clients may wish to constrain the value of a property, as in the following example:

<source lang="java5" highlight="2,3">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
import static org.gcube.contentmanagement.contentmanager.stubs.model.constraints.Constraints.*;
import static org.gcube.contentmanagement.contentmanager.stubs.model.predicates.Predicates.*;
...
DocumentProjection p = document().with(LANGUAGE,text(is("it")));
</source>

The client uses here the predicate <code>text(is("it"))</code> to constrain the language of documents to match the ISO639 code for the Italian language. As documented in the [[Content_Manager_(NEW)#Building_Predicates|CML]], the client builds the predicate with the static methods of the <code>Predicates</code> and <code>Constraints</code> classes, which he previously imports.

'''note''': in building predicate expressions with the API of the CML, clients take responsibility for associating properties with predicates that are compatible with their type. In the example above, the language of an element is a textual property and thus only <code>text()</code>-based predicates can successfully match it. The gDL relinquishes the ability to ensure the correct construction of projections so as to allow clients to use the full expressiveness of the predicate language of the CML.

The type of constraints that can be expressed on properties is thus bound by the expressiveness of the predicate language of the CML. We include here another example to illustrate some of the possibilities:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.projections.Projections.*;
import static org.gcube.contentmanagement.contentmanager.stubs.model.constraints.Constraints.*;
import static org.gcube.contentmanagement.contentmanager.stubs.model.predicates.Predicates.*;
...
Calendar from = ...
Calendar to = ...
DocumentProjection p = document().with(URL,uri(matches("^ftp.*")))
                                 .where(CREATION_TIME,date(all(after(from),before(to))));
</source>

This projection is matched by documents that have been created at some point in between two dates, and with a bytestream available at some <code>ftp</code> server. Used as a parameter in the [[#Reading Documents|read operations]] of the gDL, the projection would retrieve only the URL of (the bytestream of) matching documents.
  
 
== Streams ==

Streaming raises significant opportunities for clients, as well as non-trivial challenges. In recognition of the difficulties, the gDL includes a set of general-purpose facilities for stream conversion that simplify the tasks of filtering, transforming, or otherwise processing streams. These facilities are cast as the sentences of the '''Stream DSL''', an Embedded Domain-Specific Language (EDSL).

=== Standard and Remote Iterators ===

[[GDL_Streams_(2.0)|read more...]]

As all the sentences of the Stream DSL take and return streams, we begin by looking at how streams are represented in the gDL.

Streams have the interface of ''iterators'', i.e. yield elements on demand and are typically consumed within loops. There are two such interfaces:

* <code>Iterator&lt;T&gt;</code>, the standard Java interface for iterations;
* <code>RemoteIterator&lt;T&gt;</code>, a variation over <code>Iterator&lt;T&gt;</code> which marks explicitly the remote origin of the stream.

In particular, a <code>RemoteIterator</code> differs from a standard <code>Iterator</code> in two respects:

* the method <code>next()</code> may throw a checked <code>Exception</code>. This witnesses to the fact that iterating over the stream involves fallible I/O operations;
* there is a method <code>locator()</code> that returns a reference to the remote stream as a plain <code>String</code> in some implementation-specific syntax.
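
For reference, the contract has roughly the following shape (a sketch only; the authoritative definition of <code>RemoteIterator</code> is in the [[Content_Manager_Library|CML]]):

<source lang="java5">
public interface RemoteIterator<T> {

  boolean hasNext();

  T next() throws Exception; //iteration may fail with checked faults

  String locator(); //reference to the remote stream

}
</source>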

Locators aside, the key difference between the two interfaces is in their assumptions about the possibility of iteration failures. A standard <code>Iterator</code> does not present failures to its clients other than for requests made past the end of the stream (an unchecked <code>NoSuchElementException</code>). This may be because failures do not occur at all, e.g. the iteration is over an in-memory collection; it may also be because the iterator knows how to handle failures when these occur. In this sense, <code>Iterator&lt;T&gt;</code> may well be defined over external, even remote collections, but it assumes that all failure handling policies are responsibilities of its implementations.

In contrast, <code>RemoteIterator&lt;T&gt;</code> makes it clear that:

* failures are likely to occur;
* ''clients'' are expected to handle them.

The operations of the gDL make use of both interfaces:

* when they ''take'' streams, they expect them as standard <code>Iterator</code>s;
* when they ''return'' streams, they provide them as <code>RemoteIterator</code>s.

This choice emphasises two points:

* streams that are provided by clients are of unknown origin, while those provided by the library originate in remote services of the gCube Content Management infrastructure;
* all fault handling policies are in the hands of clients, where they should be. When clients provide an <code>Iterator</code> to the library, they will have embedded a fault handling policy in its implementation. When they receive a <code>RemoteIterator</code> from the library, they will apply a fault handling policy when consuming the stream.
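
For example, a client may apply its policy inline when consuming a <code>RemoteIterator</code> (a sketch, assuming <code>hasNext()</code> does not itself fail):

<source lang="java5">
RemoteIterator<GCubeDocument> rit = ...
while (rit.hasNext()) {
   try {
      GCubeDocument doc = rit.next();
      ...process the document...
   }
   catch(Exception e) {
      //client-side fault handling policy, e.g. log and continue
   }
}
</source>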

=== Simple Conversions ===

The sentences of the DSL begin with ''verbs'', which can be statically imported from the <code>Streams</code> class:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
</source>

The verb <code>convert</code> introduces the simplest of sentences, those that convert between <code>Iterator</code>s and <code>RemoteIterator</code>s. The following example shows the conversion of an <code>Iterator</code> into a <code>RemoteIterator</code>:

<source lang="java5" highlight="4">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
Iterator<SomeType> it = ...
RemoteIterator<SomeType> rit = convert(it);
</source>

The result is a <code>RemoteIterator</code> that promises to return failures but never does. The implementation is just a wrapper around the standard <code>Iterator</code> which returns <code>it.toString()</code> as the locator of the underlying collection.

Converting a <code>RemoteIterator</code> to an <code>Iterator</code> is more interesting because it requires the encapsulation of a fault handling policy. The following example shows the possibilities:

<source lang="java5" highlight="6,9,12,14">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<SomeType> rit = ...

//iterator will ignore any fault raised by the remote iterator
Iterator<SomeType> it1 = convert(rit).with(IGNORE_POLICY);

//iterator will stop at the first fault raised by the remote iterator
Iterator<SomeType> it2 = convert(rit).with(FAILFAST_POLICY);

//iterator will handle faults as specified by a given policy
FaultPolicy policy = new FaultPolicy() {...};

Iterator<SomeType> it3 = convert(rit).with(policy);
</source>

In this example, the clause <code>with()</code> introduces the fault handling policy to encapsulate in the resulting <code>Iterator</code>. Two common policies are predefined and can be named directly, as shown for <code>it1</code> and <code>it2</code> above:

* <code>IGNORE_POLICY</code>: any faults raised by the <code>RemoteIterator</code> are discarded by the resulting <code>Iterator</code>, which will ensure that <code>hasNext()</code> and <code>next()</code> behave as if they had not occurred;
* <code>FAILFAST_POLICY</code>: the first fault raised by the <code>RemoteIterator</code> halts the resulting <code>Iterator</code>, which will ensure that <code>hasNext()</code> and <code>next()</code> behave as if the stream had reached its natural end.

Custom policies can be defined by implementing the interface <code>FaultPolicy</code>:

<source lang="java5" highlight="3">
public interface FaultPolicy ... {

  boolean onFault(Exception e, int count);

}
</source>

In <code>onFault()</code>, clients are passed the fault raised by the <code>RemoteIterator</code>, as well as the count of faults raised so far during the iteration (this will be greater than <code>1</code> only if the policy has tolerated previous faults during the iteration). Clients apply the policy and return <code>true</code> if the fault should be tolerated and the iteration should continue, <code>false</code> if they instead wish the iteration to stop. Here's an example of a fault handling policy that tolerates only the first error and uses two aliases for the boolean values to improve the legibility of the policy (<code>CONTINUE</code> and <code>STOP</code>, also defined in the <code>Streams</code> class and statically imported):

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
FaultPolicy policy = new FaultPolicy() {

      public boolean onFault(Exception e, int count) {
            if (count==1) {
                  ...deal with the fault...
                  return CONTINUE;
            }
            else
                  return STOP;
      }
};
</source>

Note also that the <code>IGNORE_POLICY</code> is the default policy for conversions to standard iterators. Clients can use the clause <code>withDefaults()</code> to avoid naming it:

<source lang="java5" highlight="6">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<SomeType> rit = ...

//iterator will handle faults with the default policy: IGNORE_POLICY
Iterator<SomeType> it = convert(rit).withDefaults();
</source>

Finally, note that stream conversions may also be applied between <code>RemoteIterator</code>s, so as to change their fault handling policy:

<source lang="java5" highlight="6">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<SomeType> rit1 = ...

//iterator will handle faults with the IGNORE_POLICY
RemoteIterator<SomeType> rit2 = convert(rit1).withRemote(IGNORE_POLICY);
</source>

Here, the clause <code>withRemote()</code> introduces a fault policy for the <code>RemoteIterator</code> in output. Fault policies for <code>RemoteIterator</code>s are a superset of those that can be configured on standard <code>Iterator</code>s. In particular, they implement the interface <code>RemoteFaultPolicy</code>:

<source lang="java5" highlight="1,3">
public interface RemoteFaultPolicy ... {

  boolean onFault(Exception e, int count) throws Exception;

}
</source>

Note that the only difference between a <code>FaultPolicy</code> and a <code>RemoteFaultPolicy</code> is that the latter has the additional option to raise a fault of its own in <code>onFault()</code>. Thus, when a fault occurs during iteration, the <code>RemoteIterator</code> can continue iterating, stop the iteration, but also ''re-throw'' the same or another fault to the iterating client, which is indeed what makes a <code>RemoteIterator</code> different from a standard <code>Iterator</code>.
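
For instance, a custom policy might tolerate the first fault and re-throw any further one (a sketch along the lines of the <code>FaultPolicy</code> example above):

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<SomeType> rit1 = ...

RemoteFaultPolicy policy = new RemoteFaultPolicy() {

      public boolean onFault(Exception e, int count) throws Exception {
            if (count==1)
                  return CONTINUE; //tolerate the first fault
            throw e; //re-throw any further fault to the iterating client
      }
};

RemoteIterator<SomeType> rit2 = convert(rit1).withRemote(policy);
</source>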

In particular, the Stream DSL predefines a third policy which is available only for <code>RemoteIterator</code>s:

* <code>RETHROW_POLICY</code>: any faults raised during iteration will be immediately propagated to clients.

This is in fact the default policy for <code>RemoteIterator</code>s and clients can use the clause <code>withRemoteDefaults()</code> to avoid naming it:

<source lang="java5" highlight="5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<SomeType> rit1 = ...

RemoteIterator<SomeType> rit2 = convert(rit1).withRemoteDefaults();
</source>

In summary, the Stream DSL allows clients to formulate the following sentences for simple stream conversion:

* <code>convert(Iterator)</code>: converts a standard <code>Iterator</code> into a <code>RemoteIterator</code>;
* <code>convert(RemoteIterator).with(FaultPolicy)</code>: converts a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates a given <code>FaultPolicy</code>;
* <code>convert(RemoteIterator).withDefaults()</code>: converts a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates the <code>IGNORE_POLICY</code> for faults;
* <code>convert(RemoteIterator).withRemote(RemoteFaultPolicy)</code>: converts a <code>RemoteIterator</code> into another <code>RemoteIterator</code> that encapsulates a given <code>RemoteFaultPolicy</code>;
* <code>convert(RemoteIterator).withRemoteDefaults()</code>: converts a <code>RemoteIterator</code> into another <code>RemoteIterator</code> that encapsulates the <code>RETHROW_POLICY</code> for faults.

==== ResultSet Conversions ====

A different but very common form of conversion is between gCube [[GCube_ResultSet_(gRS)|result sets]] and <code>RemoteIterator</code>s. While result sets are the preferred way of modelling remote streams within the system, their iterators do not natively implement the <code>RemoteIterator&lt;T&gt;</code> interface, which has been independently defined in the [[Content_Manager_Library|CML]] as an abstraction over an underlying result set mechanism. The CML defines an initial set of [[Content_Manager_Library#Iterators_and_Collections|facilities]] to perform the conversion from result sets of untyped string payloads to <code>RemoteIterator</code>s of typed objects. The Stream DSL builds on these facilities to cater for a few common conversions:

* <code>payloadsIn(RSLocator)</code>: converts an arbitrary result set into a <code>RemoteIterator&lt;String&gt;</code> defined over the record payloads in the result set;
* <code>documentsIn(RSLocator)</code>: converts a result set of <code>GCubeDocument</code> serialisations into a <code>RemoteIterator&lt;GCubeDocument&gt;</code>;
* <code>metadataIn(RSLocator)</code>: converts a result set of <code>GCubeDocument</code> serialisations into a <code>RemoteIterator&lt;GCubeMetadata&gt;</code> defined over the metadata elements of the <code>GCubeDocument</code>s in the result set;
* <code>annotationsIn(RSLocator)</code>: converts a result set of <code>GCubeDocument</code> serialisations into a <code>RemoteIterator&lt;GCubeAnnotation&gt;</code> defined over the annotations of the <code>GCubeDocument</code>s in the result set;
* <code>partsIn(RSLocator)</code>: converts a result set of <code>GCubeDocument</code> serialisations into a <code>RemoteIterator&lt;GCubePart&gt;</code> defined over the parts of the <code>GCubeDocument</code>s in the result set;
* <code>alternativesIn(RSLocator)</code>: converts a result set of <code>GCubeDocument</code> serialisations into a <code>RemoteIterator&lt;GCubeAlternative&gt;</code> defined over the alternatives of the <code>GCubeDocument</code>s in the result set.

Essentially, <code>documentsIn()</code> adapts the result set to a <code>RemoteIterator&lt;T&gt;</code> that parses documents as it iterates over their serialisations. The following methods do the same, but extract the corresponding <code>GCubeElement</code>s from the <code>GCubeDocument</code>s obtained from parsing. All the methods are based on the first one, <code>payloadsIn()</code>, which is also immediately useful to feed result sets of <code>GCubeDocument</code> identifiers to the [[#Reading_Documents|read operations]] of the gDL that perform stream-based document lookups.

'''note''': all the conversions above produce <code>RemoteIterator</code>s that return the locator of the original result set from invocations of <code>locator()</code>. Clients can use the locator to process the stream with standard set-based APIs, as usual.

The usage pattern is straightforward and combines with the previous conversions. The following example illustrates:

<source lang="java5" highlight="4">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RSLocator rs = ...
Iterator<GCubeDocument> it = convert(documentsIn(rs)).with(FAILFAST_POLICY);
</source>

=== Piped Conversions ===

The conversions introduced [[#Simple_Conversions|above]] do not alter the original streams, i.e. the output iterators produce the same elements as the input iterators. The exception is with result set-based conversions: <code>documentsIn()</code> parses the untyped payloads of the input result sets into typed objects, while methods such as <code>metadataIn()</code> extract <code>GCubeMetadata</code> elements from <code>GCubeDocument</code>s. Parsing and extraction are only examples of the kind of post-processing that clients may wish to apply to the elements of an existing stream to produce a new stream of post-processed elements. All the remaining sentences of the Stream DSL are dedicated precisely to this kind of conversion.

Sentences introduced by the verb <code>pipe</code> take a stream and return a second stream that applies an arbitrary ''filter'' to the elements of the first stream, encapsulating a fault handling policy in the process. The following example illustrates basic usage:

<source lang="java5" highlight="5,7,12">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
Iterator<GCubeDocument> it1 = ...

Filter<GCubeDocument,String> filter = new Filter<GCubeDocument,String>() {

                  public String apply(GCubeDocument doc) throws Exception {
                          return doc.name();
                  }
};

Iterator<String> it2 = pipe(it1).through(filter).withDefaults();
</source>

Here, a standard <code>Iterator</code> of <code>GCubeDocument</code>s is piped through a filter that extracts the names of <code>GCubeDocument</code>s. The result is another standard <code>Iterator</code> that produces document names from the original stream. The clause <code>through()</code> introduces the filter on the output stream and, as already discussed for conversion methods, the clause <code>withDefaults()</code> configures the default <code>IGNORE_POLICY</code> for it.

As shown in the example, filters are implementations of the <code>Filter&lt;FROM,TO&gt;</code> interface. The method <code>apply()</code> is self-explanatory: clients are given the elements of the unfiltered stream as the filtered stream is being iterated over, and they have the onus to produce and return an element of the filtered stream. The only point worth stressing is that <code>apply()</code> can throw a fault if it cannot produce an element of the filtered stream. The filtered stream passes these faults to the <code>FaultPolicy</code> configured for it. In this example, faults clearly cannot occur. If they did, however, the configured policy would simply ignore them, i.e. the problematic elements of the input stream would not contribute to the contents of the filtered stream.
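
For reference, the contract has roughly this shape (a sketch only; the authoritative definition of <code>Filter</code> is in the gDL):

<source lang="java5">
public interface Filter<FROM,TO> {

  TO apply(FROM element) throws Exception;

}
</source>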

In the example, the input stream and the filtered one are both standard <code>Iterator</code>s. The construct, however, is generic and can be used to filter any form of stream into any other. In this sense, the construct embeds stream conversions within its clauses. As an example, consider the common case in which a <code>RemoteIterator</code> is filtered into a standard <code>Iterator</code>:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit = ...

Filter<GCubeDocument,SomeType> filter = ...

Iterator<SomeType> it = pipe(rit).through(filter).with(FAILFAST_POLICY);
</source>

Here, <code>filter</code> is applied to the elements of a <code>RemoteIterator</code> to produce a standard <code>Iterator</code> that stops as soon as the input stream raises a fault. Conversely, in the following example:

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit1 = ...

Filter<GCubeDocument,SomeType> filter = ...

RemoteIterator<SomeType> rit2 = pipe(rit1).through(filter).withRemote(IGNORE_POLICY);
</source>

Here, <code>filter</code> is applied to the elements of a <code>RemoteIterator</code> to produce yet another <code>RemoteIterator</code> that ignores any fault raised by the input iterator.

To conclude with <code>pipe</code>-based sentences, note that the Stream DSL includes <code>Processor&lt;T&gt;</code>, a base implementation of <code>Filter&lt;FROM,TO&gt;</code> that simplifies the declaration of filters that simply mutate the input and return it. The following example illustrates usage:

<source lang="java5" highlight="5,7">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit1 = ...

Processor<GCubeDocument> processor = new Processor<GCubeDocument>() {

            public void process(GCubeDocument doc) throws Exception {
                      doc.setName(doc.name()+"-modified");
            }
};

RemoteIterator<GCubeDocument> rit2 = pipe(rit1).through(processor).withRemoteDefaults();
</source>

Here, the <code>processor</code> simply updates the <code>GCubeDocument</code>s in the input stream by changing their name. The output stream thus returns the same elements as the input stream, albeit updated. During iteration, its policy is simply to re-throw any fault that may be raised by the input iterator.

In summary, the Stream DSL allows clients to formulate the following sentences for piped stream conversion:

* <code>pipe(Iterator|RemoteIterator).through(Filter|Processor).with(FaultPolicy)</code>: uses a given <code>Filter</code> or <code>Processor</code> to convert a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates a given <code>FaultPolicy</code>;
* <code>pipe(Iterator|RemoteIterator).through(Filter|Processor).withDefaults()</code>: uses a given <code>Filter</code> or <code>Processor</code> to convert a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates the <code>IGNORE_POLICY</code> for faults;
* <code>pipe(Iterator|RemoteIterator).through(Filter|Processor).withRemote(RemoteFaultPolicy)</code>: uses a given <code>Filter</code> or <code>Processor</code> to convert a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates a given <code>RemoteFaultPolicy</code>;
* <code>pipe(Iterator|RemoteIterator).through(Filter|Processor).withRemoteDefaults()</code>: uses a given <code>Filter</code> or <code>Processor</code> to convert a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates the <code>RETHROW_POLICY</code> for faults.

=== Folding Conversions ===

With <code>pipe</code>-based sentences, clients can filter the elements of a stream into the elements of another stream. While the elements of the two streams can vary arbitrarily in type, the correspondence between elements of the two streams is fairly strict: for each element of the input stream there may be at most one element of the output stream (elements that raise iteration failures in the input stream may have no counterpart in the output stream, i.e. may be discarded). In this sense, the streams are always consumed ''in phase''.

In some cases, however, clients may wish to:

* ''fold'' a stream, i.e. produce another stream that has one <code>List</code> element for each ''N'' elements of the original stream;
* ''unfold'' a stream, i.e. produce another stream that has ''N'' elements for each element in the original stream.

Conceptually, these requirements are still within the scope of filtering, but the fact that the consumption of the filtered stream is ''out of phase'' with respect to the unfiltered stream requires a rather different treatment. For this reason, the Stream DSL offers two dedicated classes of sentences:

* <code>group</code>-based sentences for stream folding;
* <code>unfold</code>-based sentences for stream unfolding.

To fold a stream, clients indicate how many elements of the stream should be grouped into elements of the folded stream, what filter should be applied to each of the elements of the stream and, as usual, what fault handling policy should be used for the folded stream. The following example illustrates usage in the common case in which a <code>RemoteIterator</code> is folded into a standard <code>Iterator</code>:

<source lang="java5" highlight="7">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit = ...

Filter<GCubeDocument,SomeType> filter = ...

Iterator<List<SomeType>> it = group(rit).in(10).pipingThrough(filter).withDefaults();
</source>

The <code>RemoteIterator</code> is here folded into <code>List</code>s of <code>10</code> elements (or fewer, if the end of the input stream is reached before a <code>List</code> can be filled). The clause <code>in()</code> indicates the maximum size of the output <code>List</code>s. Each of the <code>GCubeDocument</code>s in the original stream is then passed through <code>filter</code>, which produces one of the <code>List</code> elements for it. The clause <code>pipingThrough()</code> allows the configuration of the filter. Finally, the default <code>IGNORE_POLICY</code> is set on the folded stream with the clause <code>withDefaults()</code>, meaning that any fault raised by the <code>RemoteIterator</code> ''or'' <code>filter</code> will be tolerated and the element that caused the failure will simply not contribute to the accumulation of the next <code>10</code> elements of the folded stream.

'''note''': the example shows the folding of a <code>RemoteIterator</code> into a standard <code>Iterator</code> but, as for all the sentences of the DSL, all combinations of input and output streams are possible, with the usual implications on the fault handling policies that can be set on the folded stream and with the optional choice of <code>Processor</code>s over <code>Filter</code>s in cases where folding simply groups updated elements of the stream.

It is a common requirement to fold a stream without otherwise transforming or altering its elements. In this case, the clause <code>pipingThrough()</code> can be omitted altogether from the sentence:

<source lang="java5" highlight="5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit = ...

Iterator<List<GCubeDocument>> it = group(rit).in(10).withDefaults();
</source>

Effectively, the stream is here being filtered with a ''pass-through'' filter that simply returns the elements of the unfolded stream. As we shall see, this kind of folding is particularly useful to 'slice' a stream into small in-memory collections that can be used with the [[#Adding_Documents|write operations]] of the gDL that work in bulk and by-value.

In summary, the Stream DSL allows clients to formulate the following sentences for folding stream conversion:

* <code>group(Iterator|RemoteIterator).in(N).pipingThrough(Filter|Processor).with(FaultPolicy)</code>: uses a given <code>Filter</code> or <code>Processor</code> to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates a given <code>FaultPolicy</code>;
* <code>group(Iterator|RemoteIterator).in(N).pipingThrough(Filter|Processor).withDefaults()</code>: uses a given <code>Filter</code> or <code>Processor</code> to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates the <code>IGNORE_POLICY</code> for faults;
* <code>group(Iterator|RemoteIterator).in(N).pipingThrough(Filter|Processor).withRemote(RemoteFaultPolicy)</code>: uses a given <code>Filter</code> or <code>Processor</code> to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates a given <code>RemoteFaultPolicy</code>;
* <code>group(Iterator|RemoteIterator).in(N).pipingThrough(Filter|Processor).withRemoteDefaults()</code>: uses a given <code>Filter</code> or <code>Processor</code> to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates the <code>RETHROW_POLICY</code> for faults;
* <code>group(Iterator|RemoteIterator).in(N).with(FaultPolicy)</code>: uses a ''pass-through'' filter to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates a given <code>FaultPolicy</code>;
* <code>group(Iterator|RemoteIterator).in(N).withDefaults()</code>: uses a ''pass-through'' filter to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates the <code>IGNORE_POLICY</code> for faults;
* <code>group(Iterator|RemoteIterator).in(N).withRemote(RemoteFaultPolicy)</code>: uses a ''pass-through'' filter to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates a given <code>RemoteFaultPolicy</code>;
* <code>group(Iterator|RemoteIterator).in(N).withRemoteDefaults()</code>: uses a ''pass-through'' filter to <code>N</code>-fold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates the <code>RETHROW_POLICY</code> for faults.

=== Unfolding Conversions ===

Unfolding a stream follows a similar pattern, as shown in the following example:

<source lang="java5" highlight="7">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> rit = ...

Filter<GCubeDocument,List<SomeType>> filter = ...

Iterator<SomeType> it = unfold(rit).pipingThrough(filter).withDefaults();
</source>

This time we cannot dispense with using a <code>Filter</code>, which is necessary to map a single element of the stream into a <code>List</code> of elements that the unfolded stream, a standard <code>Iterator</code> in this example, will then yield one at a time at the client's demand. As usual, all combinations of standard <code>Iterator</code>s, <code>RemoteIterator</code>s, and fault handling policies are allowed. Using <code>Processor</code>s is instead disallowed here, as it's in the nature of unfolding to convert an element into a number of different elements. Unfolding and updates, in other words, do not interact well.
+
 
+
The most common application of unfolding is for the extraction of inner elements from documents, e.g. unfold a stream of <code>GCubeDocument</code>s into a stream of <code>GCubeMetadata</code> elements, where each element in the unfolded stream belongs to some <code>GCubeDocument</code> in the document stream. Accordingly, the Stream DSL predefines a comprehensive number of these unfoldings. We have seen some of them [[#ResultSet_Conversions|already]], where the document input stream was in the form of a result set (e.g. <code>metadataIn(RSLocator)</code>). Similar unfoldings are directly available on <code>RemoteIterator<GCubeDocument></code>s.

In summary, the Stream DSL allows clients to formulate the following sentences for unfolding stream conversion:

* <code>unfold(Iterator|RemoteIterator).pipingThrough(Filter).with(FaultPolicy)</code>: uses a given <code>Filter</code> to unfold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates a given <code>FaultPolicy</code>;

* <code>unfold(Iterator|RemoteIterator).pipingThrough(Filter).withDefaults()</code>: uses a given <code>Filter</code> to unfold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a standard <code>Iterator</code> that encapsulates the <code>IGNORE_POLICY</code> for faults;

* <code>unfold(Iterator|RemoteIterator).pipingThrough(Filter).withRemote(RemoteFaultPolicy)</code>: uses a given <code>Filter</code> to unfold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates a given <code>RemoteFaultPolicy</code>;

* <code>unfold(Iterator|RemoteIterator).pipingThrough(Filter).withRemoteDefaults()</code>: uses a given <code>Filter</code> to unfold a standard <code>Iterator</code> or a <code>RemoteIterator</code> into a <code>RemoteIterator</code> that encapsulates the <code>RETHROW_POLICY</code> for faults;

* <code>metadataIn(Iterator<GCubeDocument>|RemoteIterator<GCubeDocument>)</code>: unfolds a standard <code>Iterator&lt;GCubeDocument&gt;</code> or a <code>RemoteIterator&lt;GCubeDocument&gt;</code> into, respectively, an <code>Iterator&lt;GCubeMetadata&gt;</code> or a <code>RemoteIterator&lt;GCubeMetadata&gt;</code> defined over the metadata elements of the <code>GCubeDocument</code>s in the original stream;

* <code>annotationsIn(Iterator<GCubeDocument>|RemoteIterator<GCubeDocument>)</code>: unfolds a standard <code>Iterator&lt;GCubeDocument&gt;</code> or a <code>RemoteIterator&lt;GCubeDocument&gt;</code> into, respectively, an <code>Iterator&lt;GCubeAnnotation&gt;</code> or a <code>RemoteIterator&lt;GCubeAnnotation&gt;</code> defined over the annotations of the <code>GCubeDocument</code>s in the original stream;

* <code>partsIn(Iterator<GCubeDocument>|RemoteIterator<GCubeDocument>)</code>: unfolds a standard <code>Iterator&lt;GCubeDocument&gt;</code> or a <code>RemoteIterator&lt;GCubeDocument&gt;</code> into, respectively, an <code>Iterator&lt;GCubePart&gt;</code> or a <code>RemoteIterator&lt;GCubePart&gt;</code> defined over the parts of the <code>GCubeDocument</code>s in the original stream;

* <code>alternativesIn(Iterator<GCubeDocument>|RemoteIterator<GCubeDocument>)</code>: unfolds a standard <code>Iterator&lt;GCubeDocument&gt;</code> or a <code>RemoteIterator&lt;GCubeDocument&gt;</code> into, respectively, an <code>Iterator&lt;GCubeAlternative&gt;</code> or a <code>RemoteIterator&lt;GCubeAlternative&gt;</code> defined over the alternatives of the <code>GCubeDocument</code>s in the original stream.
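
For instance, a remote stream of documents can be unfolded into the stream of their metadata elements with a single predefined sentence (a minimal sketch; the variable names are ours):

<source lang="java5">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;
...
RemoteIterator<GCubeDocument> docs = ...

//unfold the document stream into the stream of its metadata elements
RemoteIterator<GCubeMetadata> metadata = metadataIn(docs);
</source>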

= Operations =

The operations of the gDL allow clients to add, update, delete, and retrieve document descriptions to, in, and from remote collections within the infrastructure. These <code>CRUD</code> operations target (instances of) a specific back-end within the infrastructure, the [[Content_Manager_(NEW)|Content Manager]] (CM) service. It is a direct implication of the design of the CM that the document descriptions may be stored in different forms, within repositories which may be inside or outside the strict boundaries of the infrastructure. While the gDL operations clearly expose the remote nature of document descriptions, the actual location of document descriptions, hosting repositories, and Content Manager instances is hidden from their clients.
  
In what follows, we discuss first ''read operations'', i.e. operations that localise document descriptions from remote collections. We then discuss ''write operations'', i.e. operations that persist in remote collections document descriptions which have been created or modified locally. In all cases, operations are overloaded to work with different forms of inputs and outputs. In particular, we distinguish between:
  
* '''singleton operations''': these are operations that read, add, or change individual document descriptions. Singleton operations are used for punctual interactions with the infrastructure, most noticeably those required by front-end clients to implement user interfaces. All singleton operations that target existing document descriptions require the specification of their identifiers;

* '''bulk operations''': these are operations that read, add, or change multiple document descriptions in a single interaction with the infrastructure. Bulk operations can be used for batch interactions with the infrastructure, most noticeably those required by back-end clients to implement workflows. They can also be used for real-time interactions with the infrastructure, such as those required by front-end clients that process user queries. Bulk operations may be further classified in:
 
** '''by-value operations''' are defined over in-memory collections of document descriptions. Accordingly, these operations are indicated for small-scale data transfer scenarios. As we shall see, they may also be used to move segments of larger data collections, when the creation of such segments is a functional requirement.

** '''by-reference operations''' are defined over [[#Streams|streams]] of document descriptions. These operations are indicated for medium-scale to large-scale data transfer scenarios, where the streamed processing promotes the responsiveness of clients and the effective use of network resources.

Read and write operations work with document descriptions that align with the [[GCube_Document_Model#Overview|gCube document model]] (gDM) and its implementation in the [[GCube_Document_Model#Implementation|gCube Model Library]] (gML). In the terminology of the gML, in particular, operations that create document descriptions expect new elements, while all the others take or produce element proxies.

Finally, read and write operations build on the facilities of the [[Content_Manager_Library|Content Manager Library]] (CML) to interact with the Content Manager service, including the adoption of [[Content_Manager_Library#High-Level Calls|best-effort strategies]] to discover and interact with instances of the service. These facilities are thus indirectly available to gDL clients as well.
  
== Reading Documents ==

Clients that wish to retrieve document descriptions invoke the operations of a <code>DocumentReader</code>, which executes them against a given collection of documents, the ''bound collection'', as made available in a given scope:
  
<source lang="java5" highlight="4" >
+
Some clients interact with remote collections to work exclusively with subsets of document descriptions that share certain properties, e.g. are in a given language, have changed in the last month, have metadata in a given schema, have parts of a given type, and so on. Their queries and updates are always resolved within these subsets, rather than the whole collection. Essentially, such clients have their own ''view'' of the collection.
GCubeScope scope = ...
+
String collectionID =...
+
 
+
DocumentReader reader = new DocumentReader(collectionID,scope);
+
</source>

In a secure infrastructure, the credentials provided by a security manager are also required:

<source lang="java5" highlight="5">
GCUBEScope scope = ...
GCUBESecurityManager manager = ...
String collectionID = ...

DocumentReader reader = new DocumentReader(collectionID,scope,manager);
</source>

Readers expose three <code>get()</code> operations, all of which can be parameterised with [[#Projections|projections]]:

* the first two operations ''lookup'' document descriptions from their identifiers, one as a singleton operation (takes a single identifier) and the other as a bulk operation by-reference (takes a stream of identifiers). In both cases, a projection specifies matching requirements and retrieval directives (include constraints and filter constraints). Lookup operations fail if identifiers cannot be resolved, if the target collection is not available for read-access in the target scope, or if the remote interactions fail;

* the last operation ''retrieves'' document descriptions that match a given projection, returning the descriptions in accordance with its retrieval directives (include constraints and filter constraints). It is also a bulk operation by-reference (returns a stream of document descriptions).

The operations can be illustrated as follows:

<source lang="java5" highlight="7,11,14">
DocumentReader reader = ...

DocumentProjection p = ....

//singleton lookup
String id = ...
GCubeDocument doc = reader.get(id,p);

//bulk lookup by-reference
Iterator<String> ids = ...
RemoteIterator<GCubeDocument> docs = reader.get(ids,p);

//bulk retrieval by projection
RemoteIterator<GCubeDocument> matches = reader.get(p);
</source>

A few points are worth emphasising:

The operation <code>get(Iterator,Projection)</code> takes a stream of identifiers under the standard <code>Iterator</code> interface. As discussed at length [[#Local_And_Remote_Iterators|above]], this indicates that the operation makes no assumption as to the origin of the stream and that it has no policy of its own to deal with possible iteration failures; clients need to provide one in the implementation of the <code>Iterator</code>. Conversely, the operation <code>get(Projection)</code> returns a <code>RemoteIterator</code> because it can guarantee the remote origin of the stream, though it still has no policy of its own to handle possible iteration failures; again, clients are responsible for this. Most importantly, clients are strongly recommended to use the facilities of the [[#Streams|Stream DSL]] to derive the <code>Iterator</code>s in input from other forms of streams, and to post-process the <code>RemoteIterator</code>s in output.

On a different note, all the <code>get()</code> operations can, as a convenience, take projections other than <code>DocumentProjection</code>s. Projections over the inner elements of documents are equally accepted, e.g.:

<source lang="java5" highlight="4">
DocumentReader reader = ...
MetadataProjection mp = ....

RemoteIterator<GCubeDocument> docs = reader.get(mp);
</source>

Here, matched documents are characterised directly with a <code>MetadataProjection</code>. The operation will derive a corresponding <code>DocumentProjection</code> with a single include constraint that requires matching documents to have ''at least'' one metadata element that satisfies the projection. As usual, the output stream will retrieve, of such documents, no more than the original <code>MetadataProjection</code> specifies in its include constraints. Again, clients are recommended to use the [[#Streams|Stream DSL]] to extract the metadata elements from the output stream and possibly to process them further, e.g.:

<source lang="java5" highlight="6">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;

DocumentReader reader = ...
MetadataProjection mp = ....

RemoteIterator<GCubeMetadata> metadata = metadataIn(reader.get(mp));
</source>

Similarly, the [[#Streams|Stream DSL]] can be relied upon in the common case in which input streams originate in remote result sets, or when the output streams must be consumed through the result set API. The following example illustrates some of the possibilities:

<source lang="java5" highlight="10,12,15">
import static org.gcube.contentmanagement.gcubedocumentlibrary.streams.dsl.Streams.*;

DocumentReader reader = ...
MetadataProjection mp = ....

//a result set of document identifiers
RSLocator idRS = ....

//extracts identifiers from the result set into a remote iterator and converts it into a local iterator
Iterator<String> ids = convert(payloadsIn(idRS)).withDefaults();

RemoteIterator<GCubeMetadata> metadata = metadataIn(reader.get(ids,mp));

//extracts a result set locator from the remote iterator
RSLocator docRS = new RSLocator(metadata.locator());

//use the locator with the result set API
...
</source>

Finally, note that the example above does not handle possible failures. Clients may consult the code documentation for a list of the faults that the individual operations may raise.
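
When iteration failures in the output stream should be tolerated rather than handled explicitly, the [[#Streams|Stream DSL]] can encapsulate a fault policy on the client's behalf. The following is a minimal sketch, assuming, as in the example above, that <code>convert()</code> also accepts <code>RemoteIterator</code>s:

<source lang="java5">
//localise the remote stream behind a fault policy:
//withDefaults() encapsulates the IGNORE_POLICY, which silently skips faulty elements
Iterator<GCubeMetadata> localMetadata = convert(metadata).withDefaults();
</source>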

== Creating Documents ==

== Updating Documents ==

== Deleting Documents ==

[[GDL_Operations_(2.0)|read more...]]

= Views =

Some clients interact with remote collections to work exclusively with subsets of document descriptions that share certain properties, e.g. are in a given language, have changed in the last month, have metadata in a given schema, have parts of a given type, and so on. Their queries and updates are always resolved within these subsets, rather than against the whole collection. Essentially, such clients have their own ''view'' of the collection.

The gDL offers support for working with two types of view:
* '''local views''': these are views defined by individual clients as the context for a number of subsequent queries and updates. Local views may have arbitrarily long lifetimes, and may even outlive the clients that created them, but they are never used by multiple clients. Thus local views are commonly transient and, if their definitions are somehow persisted, they are persisted locally to the 'owning' client and remain under its direct responsibility.

* '''remote views''': these are views defined by some clients and used by many others within the system. Remote views outlive all such clients and persist in the infrastructure, typically for as long as the collection does. They are defined through the [[View_Manager|View Manager]] (VM) service, which materialises them as WS-Resources. Each VM resource encapsulates the definition of the view as well as its descriptive properties, and it is responsible for managing its lifetime, e.g. keeping track of its cardinality and notifying interested clients of changes to its contents. However, VM resources are [[View_Manager#Motivations|''passive'']], i.e. they do not mediate access to the content resources in the view.

Naturally, the gDL uses [[#Projections|projections]] as view definitions. It then offers specialised <code>Reader</code>s that encapsulate such projections so as to implicitly resolve all their operations in the scope of the view. This yields view-based access to collections and allows clients to work with local views. In addition, the gDL provides local proxies of VM resources with which clients can create, discover, and inspect remote views. As these proxies map remote view definitions onto projections, remote views can be accessed with the same reading mechanisms available for local views.

[[GDL_Views_(2.0)|read more...]]

= Utilities & F.A.Q. =

The gCube Document Library offers utility classes to manage the collections and the views in the system.

[[GDL_Utilities_%26_F.A.Q._(2.0)|read more...]]
