== gCube Data Transformation Service ==


=== Introduction ===

The gCube Data Transformation Service (gDTS) is responsible for transforming content and metadata among different formats and specifications. gDTS lies on top of the Content and Metadata Management services; it interoperates with these components in order to retrieve information objects and store the transformed ones. Transformations can be performed offline and on demand, on a single object or on a group of objects.

gDTS employs a variety of pluggable converters in order to transform digital objects between arbitrary content types, and takes advantage of extended information on the content types to select the appropriate conversion elements.

As already mentioned, the main functionality of the gCube Data Transformation Service is to convert digital objects from one content format to another. The conversions are performed by transformation programs, which either have been previously defined and stored or are composed on the fly during the transformation process. Every transformation program (except for those composed on the fly) is stored in the IS.

The gCube Data Transformation Service offers benefits in many areas of gCube. The presentation layer benefits from the production of alternative representations of multimedia documents: generation of thumbnails, transformation of objects to the specific formats required by some presentation applications, and projection of multimedia files with variable quality/bitrate are just some examples of useful transformations over multimedia documents. In addition, as a conversion tool for textual documents, it offers online projection of documents in HTML format as well as in other downloadable formats such as PDF or PS. The Annotation UI can be implemented more straightforwardly on selected logical groups of content types (e.g. images), without caring about the details of the content and the support offered by browsers. Finally, by utilizing the functionality of the Metadata Broker, homogenization of metadata with variable schemas can be achieved.

=== Concepts ===

==== Content Type ====

In gDTS, content type identification and notation conform to the MIME type specification described in RFC 2045 and RFC 2046. This provides compliance with mainstream applications such as browsers, mail clients, etc. In this context, a document's content type is defined by the media type and subtype identifier, plus a set of parameters specified in an "attribute=value" notation. This extra information is exploited by data converters capable of interpreting it.

Some examples of content formats of information objects are:

- mimetype="image/png", width="500", height="500", which denotes that the object's format is the well-known Portable Network Graphics and that the image's width and height are 500 pixels.

- mimetype="text/xml", schema="dc", language="en", which denotes that the object is an XML document with schema Dublin Core and language English.
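
Concretely, using the beans of the gDTS client library (see the [[Data_Transformation#Client_Library | Client Library]] section below), such a content type can be built as in the following illustrative fragment:

<pre>
import java.util.Arrays;

import org.gcube.datatransformation.client.library.beans.Types.ContentType;
import org.gcube.datatransformation.client.library.beans.Types.Parameter;

public class ContentTypeExample {

    // Builds the image/png content type of the first example above.
    public static ContentType pngType() {
        ContentType ct = new ContentType();
        ct.mimeType = "image/png";
        ct.parameters = Arrays.asList(new Parameter("width", "500"),
                                      new Parameter("height", "500"));
        return ct;
    }
}
</pre>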

==== Transformation Units ====

A transformation unit describes the way a program can be used in order to perform a transformation from one or more source content types to a target content type. The transformation unit determines the behaviour of a program by providing the proper program parameters, potentially drawn from the content type representation. Program parameters may contain string literals and/or ranges (via wildcards) in order to denote source content types. In addition, the wildcard '-' can be used to force the presence of a program parameter in the content types set by the caller which uses this specific transformation unit.

Furthermore, transformation units may reference other transformation units and use them as "black-box" components in a transformation process. Thus, each transformation unit is identified by the pair (transformation program id, transformation unit id).
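
In the client library this pair is exactly what addresses a unit, as the examples further below show; for instance:

<pre>
// Fragment from the Client Library examples below.
TransformDataWithTransformationUnit request = new TransformDataWithTransformationUnit();
request.tpID = "$FtsRowset_Transformer"; // transformation program id
request.transformationUnitID = "6";      // transformation unit id within that program
</pre>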
==== Transformations' Graph ====

Through transformation units, new content types and program capabilities are published to gDTS. These units compose a transformation graph whose nodes and edges correspond to content types and transformation units respectively. During initialisation, the transformation graph is constructed locally from the published information stored in the IS, and it is updated periodically. Using this graph, gDTS is able to find a path of transformation units so as to transform an object from its content type (source) to a target content type.
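
The path search itself is a classic graph traversal. The following is a simplified, illustrative sketch (not the actual gDTS implementation) of finding such a path with a breadth-first search, where content types are nodes and each edge corresponds to a published transformation unit:

<pre>
import java.util.*;

public class TransformationPathSketch {

    // edges: source content type -> content types reachable through some
    // published transformation unit
    public static List<String> findPath(Map<String, List<String>> edges,
                                        String source, String target) {
        Map<String, String> parent = new HashMap<String, String>();
        Queue<String> queue = new LinkedList<String>();
        parent.put(source, null);
        queue.add(source);
        while (!queue.isEmpty()) {
            String node = queue.remove();
            if (node.equals(target)) {
                // walk back through the parents to reconstruct the path
                LinkedList<String> path = new LinkedList<String>();
                for (String n = node; n != null; n = parent.get(n))
                    path.addFirst(n);
                return path;
            }
            List<String> neighbours = edges.get(node);
            if (neighbours == null) continue;
            for (String next : neighbours)
                if (!parent.containsKey(next)) {
                    parent.put(next, node);
                    queue.add(next);
                }
        }
        return null; // no applicable chain of transformation units
    }
}
</pre>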
  
 

==== Transformation Programs ====

A transformation program is an XML document describing one or more possible transformations from a source content type to a target content type. Each transformation program references at most one program and contains one or more transformation units, one for each possible transformation. Transformation programs are stored in the IS.

Complex transformation processes are also described by ''transformation programs''. Each transformation program can reference other transformation programs and use them as "black-box" components in the transformation process it defines. Each transformation program consists of:
* One or more data input definitions. Each one defines the schema, language and type (record, ResultSet or collection) of the data that must be mapped to the particular input.

* One or more input variables. Each one of them is a placeholder for an additional string value which must be passed to the transformation program at run-time.

'''Note''': The name of the input or output schema must be given in the format '''''SchemaName=SchemaURI''''', where SchemaName is the name of the schema and SchemaURI is the URI of its definition, e.g. '''<nowiki>DC=http://dublincore.org/schemas/xmls/simpledc20021212.xsd</nowiki>'''.

Samples of transformation programs can be found in the [https://gcube.wiki.gcube-system.org/gcube/index.php/Creating_Indices_at_the_VO_Level#DataTransformation_Programs Creating Indices at the VO Level] admin guide.

 
==== Transformation Rules ====

A ''program'' (not to be confused with ''transformation program'') is the Java class which performs the actual transformation on the input data. A transformation rule is just an XML description of the interface (inputs and output) of a program.

Each program can define any number of methods, but when the transformation rule which references it is executed, the service uses reflection to locate the correct method to call, based on the input and output types defined in the transformation rule that initiates the call to the program's transformation method. The execution process is the following:

* A client invokes DTS, requesting the execution of a transformation program.

* For each transformation rule found in the transformation program:
** DTS reads the schema, language and type of the transformation rule's inputs, as well as the actual payloads given as inputs. The output format descriptor is also read.
** Based on this information, DTS constructs one or more DataSource objects and a DataSink object, which are wrapper classes around the transformation rule's input and output descriptors.
** The program to be invoked for the transformation is read from the transformation rule.
** A transformation plan is constructed, which is passed to the WorkflowDTSAdaptor in order to construct an execution plan implementing this transformation.
** DTS uses reflection to locate the transformation method to be called inside the program. This is done through the input and output descriptors of the transformation rule.

Generally speaking, the main logic in a program looks like this (see the Java sketch below):

* while (source.hasNext()) do the following:
** sourceElement = source.getNext();
** (transform sourceElement to produce 'transformedPayload')
** destElement = sink.getNewDataElement(sourceElement, transformedPayload);
** sink.writeNext(destElement);
* sink.finishedWriting();
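
Rendered as Java, and assuming the data source and sink abstractions described above (the DataSource, DataSink and DataElement types and the doTransform helper are placeholders, not actual library signatures), the loop is roughly:

<pre>
public void transform(DataSource source, DataSink sink) throws Exception {
    while (source.hasNext()) {
        DataElement sourceElement = source.getNext();
        // the program-specific conversion of the element's payload
        byte[] transformedPayload = doTransform(sourceElement);
        DataElement destElement = sink.getNewDataElement(sourceElement, transformedPayload);
        sink.writeNext(destElement);
    }
    sink.finishedWriting();
}
</pre>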
  
 
=== Implementation Overview ===

The gCube Data Transformation Service comprises: the Data Transformation Service component, which implements the WS interface of the service; the Data Transformation Library, which carries out the basic functionality of the service, i.e. the selection of the appropriate conversion element and the execution of the transformation over the information objects; a set of data handlers, which are responsible for fetching and returning/storing the information objects; and, finally, the conversion elements ("Programs") that perform the conversions.

==== Data Transformation Service ====

The Data Transformation Service component implements the WS interface of the service. Basically, it is the entry point for the functionality that the Data Transformation Library provides. Its main job is to check the parameters of each invocation, instantiate the data handlers, invoke the appropriate method of the Data Transformation Library and inform clients of any faults.

A Data Transformation Service RI operates successfully over multiple scopes by keeping any necessary information for each scope independently.
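
On the client side, each invocation is bound to a scope before the service proxy is built, as the Client Library examples below show; for instance:

<pre>
// The scope name is illustrative.
ScopeProvider.instance.set("/gcube/devsec");
DTSCLProxyI proxy = DataTransformationDSL.getDTSProxyBuilder().build();
</pre>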

==== Data Transformation Library ====

  
The core functionality of gDTS is implemented inside the Data Transformation Library. The library contains various packages which are responsible for the different features of gDTS. Its central class is DTSCore, which orchestrates the rest of the components.

A DTSCore instance contains a transformations graph, which determines the transformation that will be performed (if the transformation program/unit is not explicitly set), as well as an information manager (IManager), the class responsible for fetching information about the transformation programs. The IManager implementation currently used by gDTS is ISManager, which fetches, publishes and queries transformation programs from the [[gCore Based Information System]].

The following diagram depicts the operations applied by the Data Transformation Library on data elements when a transformation is requested and a transformation unit has not been explicitly set.

[[Image:Data_transformation_deployment_large.jpg|thumb|none|756px|Data Transformation Operational Diagram]]

For each data element, the Data Elements Broker reads the element's content type and, by utilizing the transformations graph, determines the proper Transformation Unit that is going to perform the transformation. Each transformation path, consisting of Transformation Units, is added to the transformation plan that will be passed to the WorkflowDTSAdaptor; subsequent objects with the same content type are handled by the same plan. If an object with a different content type comes from the Data Source, the Data Elements Broker uses the Transformations Graph again and a new transformation plan is created. If the graph does not manage to find any applicable transformation unit for a data element, that element is ignored, as are all subsequent elements with the same content type.

Apart from the core functionality, the Data Transformation Library also contains the interfaces of the [[Data_Transformation#Data_Transformation_Programs | Programs]] and [[Data_Transformation#Data_Transformation_Handlers | Data Handlers]], which have to be implemented by any program or data handler. Finally, the report and query packages contain classes for the reporting and querying functionality respectively.

==== WorkflowDTSAdaptor ====

The WorkflowDTSAdaptor takes a transformation plan as input, provided by the [https://gcube.wiki.gcube-system.org/gcube/index.php/Data_Transformation#Data_Transformation_Library Data Transformation Library]. For each plan, a new [https://gcube.wiki.gcube-system.org/gcube/index.php/Execution_Engine#Execution_Plan Execution Plan] is instantiated as the transformation chain described by the plan. In this way, a different transformation chain is created for every content type of the data elements provided by the source. The chain then retrieves every data element of that particular content type and applies the transformation: Data Bridge and Program instances are created, and each data element is appended to the source Data Bridge of the Transformation Unit.
In the Transformation Unit, the data elements are then transformed one by one by the program, and the result is appended to the target data bridge contained in the transformation unit. These objects are finally merged by the Data Source Merger, which reads objects from all the transformation chains in parallel and appends them to the Data Sink.


==== Data Transformation Handlers ====

gDTS has to perform some procedures in order to fetch and store content. These procedures are completely independent of the basic functionality of gDTS, which is to transform one or more objects into different content formats, and they do not affect it in any way. Whenever gDTS is invoked, the caller-supplied data is automatically wrapped in a data source object. In a similar way, the output of the transformation is wrapped in a data sink object. The source and sink objects can then be used by the invoked Java program in order to read each source object sequentially and write its transformed counterpart to the destination. Thanks to the abstraction provided by the data sources and data sinks, data objects are processed homogeneously, no matter what the nature of the original source and destination is.

Clients identify the appropriate data handler by its name in the input/output type parameter contained in each transform method of gDTS. The service then dynamically loads the Java class of the data handler that corresponds to this type.
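
A hypothetical sketch of this dynamic loading is shown below; the name-to-class mapping and the reflection calls are illustrative assumptions, not the actual gDTS internals:

<pre>
import java.util.Map;

public class DataHandlerLoader {

    // Maps a handler name (e.g. "TMDataSource") to its fully qualified class name.
    private final Map<String, String> handlerClasses;

    public DataHandlerLoader(Map<String, String> handlerClasses) {
        this.handlerClasses = handlerClasses;
    }

    // Locates and instantiates the data handler that corresponds to the
    // input/output type name supplied by the client.
    public Object loadHandler(String typeName) throws Exception {
        String className = handlerClasses.get(typeName);
        if (className == null)
            throw new IllegalArgumentException("Unknown data handler type: " + typeName);
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }
}
</pre>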
  
The available Data Handlers are:

===== Data Sources =====

<table border="1">
<tr style="white-space: nowrap; text-align: left;"><th>Data Source Name</th><th>Input Name</th><th>Input Value</th><th>Input Parameters</th><th>Description</th></tr>
<tr><td>TMDataSource</td><td>TMDataSource</td><td>content collection id</td><td>NA</td><td>Fetches all the trees that belong to a tree collection.</td></tr>
<tr><td>RSBlobDataSource</td><td>RSBlob</td><td>result set locator</td><td>NA</td><td>Reads the content of a result set with blob elements.</td></tr>
<tr><td>FTPDataSource</td><td>FTP</td><td>host name</td><td>username, password, directory, port</td><td>Downloads content from an FTP server.</td></tr>
<tr><td>URIListDataSource</td><td>URIList</td><td>url</td><td>NA</td><td>Fetches content from the URLs contained in a file whose location is set as the input value.</td></tr>
</table>

===== Data Sinks =====

<table border="1">
<tr style="white-space: nowrap; text-align: left;"><th>Data Sink Name</th><th>Output Name</th><th>Output Value</th><th>Output Parameters</th><th>Description</th></tr>
<tr><td>RSBlobDataSink</td><td>RSBlob</td><td>NA</td><td>NA</td><td>Puts data into a result set with blob elements.</td></tr>
<tr><td>RSXMLDataSink</td><td>RSXML</td><td>NA</td><td>NA</td><td>Puts (XML) data into a result set with XML elements.</td></tr>
<tr><td>FTPDataSink</td><td>FTP</td><td>host name</td><td>username, password, port, directory</td><td>Stores objects on an FTP server.</td></tr>
</table>

===== Data Bridges =====

<table border="1">
<tr style="white-space: nowrap; text-align: left;"><th>Data Bridge Name</th><th>Parameters</th><th>Description</th></tr>
<tr><td>RSBlobDataBridge</td><td>NA</td><td>Used as a buffer of data elements. Utilizes the RS in order to keep objects on disk.</td></tr>
<tr><td>REFDataBridge</td><td>flowControled = "true|false", limit</td><td>Keeps references to data elements. If flow control is enabled, at most #limit data elements can exist in the bridge.</td></tr>
<tr><td>FilterDataBridge</td><td>NA</td><td>Filters the contents of a Data Source by a content format.</td></tr>
</table>
==== Data Transformation Programs ====

The available transformations that gDTS can use reside externally to the service, as separate Java classes called Programs (not to be confused with 'Transformation Programs'). Each program is an independent, self-describing entity that encapsulates the logic of the transformation process it performs. gDTS loads the required programs dynamically as the execution proceeds and supplies them with the input data that must be transformed. Since the loading is done at run-time, extending the transformation capabilities of gDTS by adding programs is a trivial task: the new program has to be written as a Java class and referenced in the classpath, so that it can be located when required.

gDTS provides helper functionality to simplify the creation of new programs. This functionality is exposed to the program author through a set of abstract Java classes, which are included in the gCube Data Transformation Library. A hypothetical skeleton of such a program is sketched below.
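
The skeleton below is purely illustrative: the Program interface and the DataElement type shown here stand in for the actual interfaces and helper classes of the Data Transformation Library.

<pre>
import java.util.Map;

// Hypothetical example of a new program that upper-cases the textual payload
// of each data element. Program and DataElement are illustrative stand-ins
// for the library's actual classes.
public class UpperCaseTransformer implements Program {

    public DataElement transform(DataElement input, Map<String, String> parameters)
            throws Exception {
        String text = new String(input.getContent(), "UTF-8");
        return input.withContent(text.toUpperCase().getBytes("UTF-8"));
    }
}
</pre>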
The available Program implementations are:

<table border="1">
<tr style="white-space: nowrap; text-align: left;"><th>Name</th><th>Description</th></tr>
<tr><td>DocToTextTransformer</td><td>Extracts plain text from MS Word documents</td></tr>
<tr><td>ExcelToTextTransformer</td><td>Extracts plain text from MS Excel documents</td></tr>
<tr><td>FtsRowset_Transformer</td><td>Creates full text rowsets from XML documents</td></tr>
<tr><td>FwRowset_Transformer</td><td>Creates forward rowsets from XML documents</td></tr>
<tr><td>GeoRowset_Transformer</td><td>Creates geo rowsets from XML documents</td></tr>
<tr><td>ImageMagickWrapperTP</td><td>Currently able to convert images to any image type, create thumbnails and watermark images; any other operation of the ImageMagick library can be incorporated</td></tr>
<tr><td>PDFToJPEGTransformer</td><td>Creates JPEG images from a page of a PDF document</td></tr>
<tr><td>PDFToTextHTMLTransformer</td><td>Converts a PDF document to HTML or text</td></tr>
<tr><td>PPTToTextTransformer</td><td>Extracts plain text from PowerPoint documents</td></tr>
<tr><td>TextToFtsRowset_Transformer</td><td>Creates full text rowsets from plain text</td></tr>
<tr><td>XSLT_Transformer</td><td>Applies an XSLT to an XML document</td></tr>
<tr><td>AggregateFTS_Transformer</td><td>Transforms metadata documents coming from multiple metadata collections into a single FTS rowset</td></tr>
<tr><td>AggregateFWD_Transformer</td><td>Transforms metadata documents coming from multiple metadata collections into a single FWD rowset</td></tr>
<tr><td>Zipper</td><td>Zips single or multi-part files</td></tr>
<tr><td>GnuplotWrapperTP</td><td>Creates a plot described by a gnuplot script</td></tr>
<tr><td>GraphvizWrapperTP</td><td>Creates a graph using the Graphviz library</td></tr>
</table>

=== Client Library ===

==== Maven coordinates ====

<pre>
<dependency>
  <groupId>org.gcube.data-transformation</groupId>
  <artifactId>dts-client-library</artifactId>
  <version>...</version>
</dependency>
</pre>
  
==== Creating full text rowsets from tree collection ====

The first example demonstrates how to create full text rowsets from a tree collection. In the input field of the request we set as input type the content collection data source type, which is ''TMDataSource'', and as value the tree collection id (see [[Data_Transformation#Data_Sources | Data Sources]]). The output field specifies that the result of the transformation will be appended to a result set, which is created by the data sink and returned in the response (see [[Data_Transformation#Data_Sinks | Data Sinks]]). Finally, the request explicitly selects the transformation program and transformation unit to use (''$FtsRowset_Transformer'', unit ''6''), and the target content type ''<nowiki>text/xml, schemaURI="http://ftrowset.xsd"</nowiki>'' is specified in the respective request parameter.
  
 
<pre>
import java.util.Arrays;

import org.gcube.common.scope.api.ScopeProvider;
import org.gcube.datatransformation.client.library.beans.Types.*;
import org.gcube.datatransformation.client.library.exceptions.DTSException;
import org.gcube.datatransformation.client.library.proxies.DTSCLProxyI;
import org.gcube.datatransformation.client.library.proxies.DataTransformationDSL;

public class DTSClient_CreateFTRowsetFromContent {

    public static void main(String[] args) throws Exception {
        String scope = args[0];
        String id = args[1];
        ScopeProvider.instance.set(scope);
        DTSCLProxyI proxyRandom = DataTransformationDSL.getDTSProxyBuilder().build();

        TransformDataWithTransformationUnit request = new TransformDataWithTransformationUnit();
        request.tpID = "$FtsRowset_Transformer";
        request.transformationUnitID = "6";

        /* INPUT */
        Input input = new Input();
        input.inputType = "TMDataSource";
        input.inputValue = id;
        request.inputs = Arrays.asList(input);

        /* OUTPUT */
        request.output = new Output();
        request.output.outputType = "RS2";

        /* TARGET CONTENT TYPE */
        request.targetContentType = new ContentType();
        request.targetContentType.mimeType = "text/xml";
        Parameter param = new Parameter("schemaURI", "http://ftrowset.xsd");
        request.targetContentType.parameters = Arrays.asList(param);

        /* PROGRAM PARAMETERS */
        Parameter xsltParameter1 = new Parameter("xslt:1", "$BrokerXSLT_DwC_anylanguage_to_ftRowset_anylanguage");
        Parameter xsltParameter2 = new Parameter("xslt:2", "$BrokerXSLT_Properties_anylanguage_to_ftRowset_anylanguage");
        Parameter xsltParameter3 = new Parameter("xslt:3", "$BrokerXSLT_PROVENANCE_anylanguage_to_ftRowset_anylanguage");
        Parameter xsltParameter4 = new Parameter("finalftsxslt", "$BrokerXSLT_wrapperFT");
        Parameter indexTypeParameter = new Parameter("indexType", "ft_2.0");
        request.tProgramUnboundParameters = Arrays.asList(xsltParameter1, xsltParameter2,
                xsltParameter3, xsltParameter4, indexTypeParameter);

        request.filterSources = false;
        request.createReport = false;

        TransformDataWithTransformationUnitResponse response = null;
        try {
            response = proxyRandom.transformDataWithTransformationUnit(request);
        } catch (DTSException e) {
            e.printStackTrace();
            return;
        }
        // Locator of the result set that contains the transformed data.
        String output = response.output;
        System.out.println(output);
    }
}
</pre>
  

==== Creating forward rowsets from tree collection ====

The second example demonstrates how to create forward rowsets from a tree collection. As in the previous example, the input type of the request is set to the content collection data source type ''TMDataSource'' with the tree collection id as value, and the output field specifies that the transformed data will be appended to a result set created by the data sink and returned in the response. The request explicitly selects the transformation program and transformation unit to use (''$FwRowset_Transformer'', unit ''1''), and the target content type ''<nowiki>text/xml, schemaURI="http://fwrowset.xsd"</nowiki>'' is specified in the respective request parameter.
  
 
<pre>
import java.util.Arrays;

import org.gcube.common.scope.api.ScopeProvider;
import org.gcube.datatransformation.client.library.beans.Types.*;
import org.gcube.datatransformation.client.library.exceptions.DTSException;
import org.gcube.datatransformation.client.library.proxies.DTSCLProxyI;
import org.gcube.datatransformation.client.library.proxies.DataTransformationDSL;

public class DTSClient_CreateFWRowsetFromContent {

    public static void main(String[] args) throws Exception {
        String scope = args[0];
        String id = args[1];
        ScopeProvider.instance.set(scope);
        DTSCLProxyI proxyRandom = DataTransformationDSL.getDTSProxyBuilder().build();

        TransformDataWithTransformationUnit request = new TransformDataWithTransformationUnit();
        request.tpID = "$FwRowset_Transformer";
        request.transformationUnitID = "1";

        /* INPUT */
        Input input = new Input();
        input.inputType = "TMDataSource";
        input.inputValue = id;
        request.inputs = Arrays.asList(input);

        /* OUTPUT */
        request.output = new Output();
        request.output.outputType = "RS2";

        /* TARGET CONTENT TYPE */
        request.targetContentType = new ContentType();
        request.targetContentType.mimeType = "text/xml";
        Parameter param = new Parameter("schemaURI", "http://fwrowset.xsd");
        request.targetContentType.parameters = Arrays.asList(param);

        /* PROGRAM PARAMETERS */
        Parameter xsltParameter1 = new Parameter("xslt:1", "$BrokerXSLT_DwC_anylanguage_to_fwRowset_anylanguage");
        Parameter xsltParameter2 = new Parameter("finalfwdxslt", "$BrokerXSLT_wrapperFWD");
        request.tProgramUnboundParameters = Arrays.asList(xsltParameter1, xsltParameter2);

        request.filterSources = false;
        request.createReport = false;

        TransformDataWithTransformationUnitResponse response = null;
        try {
            response = proxyRandom.transformDataWithTransformationUnit(request);
        } catch (DTSException e) {
            e.printStackTrace();
            return;
        }
        // Locator of the result set that contains the transformed data.
        String output = response.output;
        System.out.println(output);
    }
}
</pre>
  
This example demonstrates how one can get an array of transformation programs that could be used in order to transform metadata from a given source format to a given target format. The operation that can be used in order to accomplish this is '<tt>findPossibleTransformationPrograms</tt>'. The caller must specify a source and target metadata format and the service searches for possible "chains" of existing transformation programs that could be used in order to carry out the transformation. There are three rules imposed by the Metadata Broker service:
* Only transformation programs with one data input are considered during the search
* Each transformation program can be used at most once inside each chain of transformation programs (this is needed in order to avoid infinite loops)
* A transformation program that produces a collection as its output can only be the last one inside a chain of transformation programs
  
Each chain composed by the Metadata Broker service is converted to a transformation program which "links" the individual transformation programs forming the chain. This synthesized transformation program contains a transformation rule for each transformation program in the chain, and each transformation rule describes a call to the corresponding transformation program. The result of the operation is an array of strings, where each string corresponds to a synthesized transformation program.
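The chain search itself can be pictured as a breadth-first search over the stored transformation programs, honouring the three rules above. The fragment below is only an illustration: <tt>TPDescriptor</tt> and the repository list are hypothetical stand-ins for the broker's internal representation, not actual Metadata Broker classes.

<pre>
import java.util.*;

// Hypothetical descriptor: one data input (rule 1), one output.
class TPDescriptor {
    String id;
    String inputFormat;       // serialized TPIOType of the single data input
    String outputFormat;      // serialized TPIOType of the output
    boolean outputsCollection;
}

class ChainSearch {
    // Returns every chain of transformation programs leading from source to target.
    static List<List<TPDescriptor>> findChains(String source, String target,
                                               List<TPDescriptor> repository) {
        List<List<TPDescriptor>> results = new ArrayList<>();
        Deque<List<TPDescriptor>> frontier = new ArrayDeque<>();
        for (TPDescriptor tp : repository)
            if (tp.inputFormat.equals(source))
                frontier.add(new ArrayList<>(List.of(tp)));
        while (!frontier.isEmpty()) {
            List<TPDescriptor> chain = frontier.poll();
            TPDescriptor last = chain.get(chain.size() - 1);
            if (last.outputFormat.equals(target)) { results.add(chain); continue; }
            if (last.outputsCollection) continue;    // rule 3: a collection producer ends a chain
            for (TPDescriptor next : repository) {
                if (chain.contains(next)) continue;  // rule 2: each program at most once
                if (next.inputFormat.equals(last.outputFormat)) {
                    List<TPDescriptor> extended = new ArrayList<>(chain);
                    extended.add(next);
                    frontier.add(extended);
                }
            }
        }
        return results;
    }
}
</pre>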
  
It is possible that some of the transformation programs included in a chain contain input variables. For each variable found, the Metadata Broker service places a variable in the synthesized transformation program and maps it to the original one. This way one can specify the values of the variables contained in every transformation program involved in the chain by specifying the values of the corresponding variables of the synthesized transformation program. This mechanism is necessary because the individual transformation programs contained in the chain are not visible to the caller; the only entity the caller sees is the synthesized transformation program, which is responsible for calling the ones it is built from.
  
Consider the case where a transformation program whose output language is a variable is added to a chain. When the service searches for another transformation program to append to the chain after that one, it may find a transformation program whose input language is 'en' (English). Then, the value 'en' will be assigned to the variable field describing the previous transformation program's output language. The same happens if an output field (schema or language) of a transformation program contains a specific value and the corresponding input field of the next transformation program is a variable. But what happens if the two fields are both variables? In this case, an input variable is added to the synthesized transformation program. When the caller uses this transformation program, he/she will need to specify a value for this variable. That value will then be assigned automatically both to the output field of the first transformation program and to the input field of the second transformation program.
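A minimal sketch of what such a unification looks like in a synthesized transformation program: a single caller-visible variable feeds both the first rule's output language and the second rule's input language. The element names follow the examples above, but the fragment is illustrative and not a complete document.

<pre>
<Variable name="lang" />
...
<Output name="Rule1Output">
    <Language isVariable="true">//Variable[@name='lang']</Language>
</Output>
...
<Input name="Rule2Input">
    <Language isVariable="true">//Variable[@name='lang']</Language>
</Input>
</pre>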
  
Now let's see how one can call the '<tt>findPossibleTransformationPrograms</tt>' operation:
  
<pre>
import org.apache.axis.message.addressing.Address;
import org.apache.axis.message.addressing.EndpointReferenceType;
import org.diligentproject.metadatamanagement.metadatabrokerlibrary.programs.TPIOType;
import org.diligentproject.metadatamanagement.metadatabrokerservice.stubs.FindPossibleTransformationPrograms;
import org.diligentproject.metadatamanagement.metadatabrokerservice.stubs.FindPossibleTransformationProgramsResponse;
import org.diligentproject.metadatamanagement.metadatabrokerservice.stubs.MetadataBrokerPortType;
import org.diligentproject.metadatamanagement.metadatabrokerservice.stubs.service.MetadataBrokerServiceAddressingLocator;

public class TestFindPossibleTPs {

    public static void main(String[] args) {
        try {
            // Create an endpoint reference to the service
            EndpointReferenceType endpoint = new EndpointReferenceType();
            endpoint.setAddress(new Address(args[0]));
            MetadataBrokerPortType broker =
                new MetadataBrokerServiceAddressingLocator().getMetadataBrokerPortTypePort(endpoint);

            // Create the IO format descriptors
            TPIOType inFormat = TPIOType.fromParams(args[1], args[2], args[3], "");
            TPIOType outFormat = TPIOType.fromParams(args[4], args[5], args[6], "");

            // Prepare the invocation parameters
            FindPossibleTransformationPrograms params = new FindPossibleTransformationPrograms();
            params.setInputFormat(inFormat.toXMLString());
            params.setOutputFormat(outFormat.toXMLString());

            // Invoke the remote operation and print the returned transformation programs
            FindPossibleTransformationProgramsResponse resp = broker.findPossibleTransformationPrograms(params);
            String[] TPs = resp.getTransformationProgram();
            for (String TP : TPs) {
                System.out.println(TP);
                System.out.println();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
</pre>
 
This code fragment assumes the following:
* args[0] = the Metadata Broker service URI
* args[1] = the source format type (<tt>resultset</tt>, <tt>collection</tt> or <tt>record</tt>)
* args[2] = the source format language
* args[3] = the source format schema (in <tt>schemaName=schemaURI</tt> format)
* args[4] = the target format type (<tt>resultset</tt>, <tt>collection</tt> or <tt>record</tt>)
* args[5] = the target format language
* args[6] = the target format schema (in <tt>schemaName=schemaURI</tt> format)
 
First, an endpoint reference to the Metadata Broker service is created. Then, we have to create the source and target format descriptors. The remote operation accepts two strings describing the two metadata formats. These strings are nothing more than the serialized form of two '''''TPIOType''''' objects. The ''TPIOType'' class is the base class of the ''CollectionType'', ''ResultSetType'' and ''RecordType'' classes. This class defines the static method '''''fromParams''''' which creates and returns an object describing a metadata format based on given values for the format's schema, language, type and data reference. The returned object will be an instance of the correct class (derived from TPIOType), based on the given value for the 'type' attribute. Here, the 'reference' attribute is not used because we are interested in the metadata format itself and not in the data it describes. After constructing the two objects, we get their serialized form by calling the '''''toXMLString()''''' method on them. The returned strings are the ones that must be passed to the remote operation.
 
 
Next, we invoke the remote operation and then we just print the returned transformation programs.
 

Latest revision as of 14:01, 19 October 2016

Transformation Units

A transformation unit describes the way a program can be used in order to perform a transformation from one or more source content type to a target content type. The transformation unit determines the behaviour of each program by providing proper program parameters potentially drawn from the content type representation. Program parameters may contain string literals and/or ranges (via wildcards) in order to denote source content types. In addition, the wildcard ‘-’ can be used to force the presence of a program parameter in the content types set by the caller which uses this specific transformation unit. Furthermore, transformation units may reference other transformation units and use them as “black-box” components in a transformation process. Thus, each transformation unit is identified by the pair (transformation program id, transformation unit id).

Transformations' Graph

Through transformation units, new content types and program capabilities are published to gDTS. These units compose a transformation graph, whose nodes and edges correspond to content types and transformation units respectively. During initialisation, the transformation graph is constructed locally from the published information stored in the IS, and it is updated periodically. Using this graph, gDTS is able to find a path of transformation units in order to perform an object transformation from its content type (source) to a target content type.
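As a rough illustration of the path search, the graph can be modelled with content types as nodes and transformation units as labelled edges, and a path found with a breadth-first search. The sketch below uses hypothetical names (TransformationGraph, Edge, findPath); it is not the actual gDTS implementation.

import java.util.*;

// Illustrative only: content types are nodes, transformation units are edges.
class TransformationGraph {
	static class Edge {
		final String from, to, unitId;
		Edge(String from, String to, String unitId) { this.from = from; this.to = to; this.unitId = unitId; }
	}

	private final Map<String, List<Edge>> adjacency = new HashMap<>();

	void addUnit(String source, String target, String unitId) {
		adjacency.computeIfAbsent(source, k -> new ArrayList<>()).add(new Edge(source, target, unitId));
	}

	// BFS for a path of transformation units from a source to a target content type.
	List<Edge> findPath(String source, String target) {
		Map<String, Edge> cameBy = new HashMap<>();
		Deque<String> queue = new ArrayDeque<>(List.of(source));
		Set<String> visited = new HashSet<>(List.of(source));
		while (!queue.isEmpty()) {
			String node = queue.poll();
			if (node.equals(target)) {
				LinkedList<Edge> path = new LinkedList<>();
				for (Edge e = cameBy.get(node); e != null; e = cameBy.get(e.from)) path.addFirst(e);
				return path;
			}
			for (Edge e : adjacency.getOrDefault(node, List.of())) {
				if (visited.add(e.to)) { cameBy.put(e.to, e); queue.add(e.to); }
			}
		}
		return null; // no applicable chain of transformation units
	}
}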

Transformation Programs

A transformation program is an XML document describing one or more possible transformations from a source content type to a target content type. Each transformation program references at most one program, and it contains one or more transformation units for each possible transformation. Transformation programs are stored in the IS.

Complex transformation processes are also described by transformation programs. Each transformation program can reference other transformation programs and use them as “black-box” components in the transformation process it defines. Each transformation program consists of:

  • One or more data input definitions. Each one defines the schema, language and type (record, ResultSet or collection) of the data that must be mapped to the particular input.
  • One or more input variables. Each one of them is a placeholder for an additional string value which must be passed to the transformation program at run-time.
  • Exactly one data output definition, which contains the output data type (record, ResultSet or collection), schema and language.
  • One or more transformation rule definitions.

Note: The name of the input or output schema must be given in the format SchemaName=SchemaURI, where SchemaName is the name of the schema and SchemaURI is the URI of its definition, e.g. DC=http://dublincore.org/schemas/xmls/simpledc20021212.xsd.

Samples of transformation programs can be found in the Creating Indices at the VO Level admin guide.
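To make the structure concrete, the skeleton below shows the main elements of a transformation program. It is an illustrative sketch following the element names used elsewhere on this page, not a validated sample.

<TransformationProgram UniqueID="...">
	<Input name="TPInput">
		<Schema>DC=http://dublincore.org/schemas/xmls/simpledc20021212.xsd</Schema>
		<Language>en</Language>
		<Type>resultset</Type>
	</Input>
	<Variable name="var1" />
	<Output name="TPOutput">
		<Schema>DC=http://dublincore.org/schemas/xmls/simpledc20021212.xsd</Schema>
		<Language>en</Language>
		<Type>resultset</Type>
	</Output>
	<TransformationRule>
		...
	</TransformationRule>
</TransformationProgram>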

Transformation Rules

Transformation rules are the building blocks of transformation programs. Each transformation program always contains at least one transformation rule. Transformation rules describe simple transformations and execute in the order in which they are defined inside the transformation program. Usually the output of a transformation rule is the input of the next one, so a transformation program can be thought of as a chain of transformation rules which work together in order to perform the complex transformation defined by the whole transformation program.

Each transformation rule consists of:

  • One or more data input definitions. Each definition contains the schema, language, type (record, ResultSet, collection or input variable) and data reference of the input it describes. Each one of these elements (except for the 'type' element) can be either a literal value, or a reference to another value defined inside the transformation program (using XPath syntax).
  • Exactly one data output, which can be:
    • A definition that contains the output data type (record, ResultSet or collection), schema and language.
    • A reference to the transformation program's output (using XPath syntax). This is the way to express that the output of this transformation rule will also be the output of the whole transformation program, so such a reference is only valid for the transformation program's final rule.
  • The name of the underlying program to execute in order to do the transformation, using standard 'packageName.className' syntax.

A transformation rule can also be a reference to another transformation program. This way, whole transformation programs can be used as parts of the execution of another transformation program. The reference can be made using the unique id of the transformation program being referenced and a set of value assignments to its data inputs and variables.

Note: The name of the input or output schema must be given in the format SchemaName=SchemaURI, where SchemaName is the name of the schema and SchemaURI is the URI of its definition, e.g. DC=http://dublincore.org/schemas/xmls/simpledc20021212.xsd.

Variable fields inside data input/output definitions

Inside the definition of the data inputs and outputs of transformation programs and transformation rules, any field except for 'Type' can be declared as a variable field. Just like input variables, variable fields get their values through run-time assignments. In order to declare an element as a variable field of its parent element, one needs to include 'isVariable=true' in the element's definition. When the caller invokes a broker operation in order to transform some metadata, he/she can provide a set of value assignments to the input variables and variable fields of the transformation program definition. But the caller has access only to the variables of the whole transformation program, not to the internal transformation rules.

However, transformation rules can also contain variable fields in their input/output definitions. Since the caller cannot explicitly assign values to them, such variable fields must contain an XPath expression as their value, which points to another element inside the transformation program that holds the value to be assigned. These references are resolved when each transformation rule is executed, so if, for example, a variable field of a transformation rule's input definition points to a variable field of the previous transformation rule's output definition, it is guaranteed that the referenced element's value will be there at the time of execution of the second transformation rule. It is important to note that every XPath expression should specify an absolute location inside the document, which basically means it should start with '/'.
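For example, a rule input can take its language from the previous rule's output definition via an absolute XPath, as in the fragment below (the element names follow the transformation rule examples earlier on this page; the fragment is illustrative):

<Input name="Rule2Input1">
	<Language isVariable="true">//Output[@name='TPRule1Output']/Definition/Language</Language>
	...
</Input>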

There is a special case where the language and schema fields of a transformation program's data input definition can automatically get values assigned to them, without requiring the caller to do so. This can happen when the type of the particular data input is set to collection. In this case, the Metadata Broker service automatically retrieves the format of the metadata collection described by the ID that is given through the Reference field of the data input definition and assigns the actual schema descriptor and language identifier of the collection to the respective variable fields of the data input definition. If any of these fields already contain values, these values are compared with the ones retrieved from the metadata collection's profile, and if they differ, the execution of the transformation program stops and an exception is thrown by the Metadata Broker service. Note that the automatic value assignment works only on data inputs of transformation programs and NOT on data inputs of individual transformation rules.

Programs

A program (not to be confused with a transformation program) is the Java class which performs the actual transformation on the input data. A transformation rule is just an XML description of the interface (inputs and output) of a program.

Each program can define any number of methods; when the transformation rule which references it is executed, the service uses reflection to locate the correct method to call, based on the input and output types defined in that rule. The execution process is the following:

  • A client invokes DTS requesting the execution of a transformation program.
  • For each transformation rule found in the transformation program:
    • DTS reads the schema, language and type of the transformation rule's inputs, as well as the actual payloads given as inputs. The output format descriptor is also read.
    • Based on this information, DTS constructs one or more DataSource and a DataSink object, which are wrapper classes around the transformation rule's input and output descriptors.
    • The program to be invoked for the transformation is read from the transformation rule.
    • A transformation plan is constructed, which is passed to the WorkflowDTSAdaptor in order to construct an execution plan implementing this transformation.
    • DTS uses reflection in order to locate the transformation method to be called inside the program. This is done through the input and output descriptors of the transformation rule.
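A minimal sketch of this reflection-based lookup; the method name convention and the wrapper types are hypothetical, for illustration only.

import java.lang.reflect.Method;

// Illustrative only: find a 'transform' method on a dynamically loaded program
// class whose parameter types are compatible with the rule's source and sink wrappers.
class ProgramInvoker {
	static Method findTransformMethod(Class<?> programClass, Class<?> sourceType, Class<?> sinkType)
			throws NoSuchMethodException {
		for (Method m : programClass.getMethods()) {
			Class<?>[] params = m.getParameterTypes();
			if (m.getName().equals("transform") && params.length == 2
					&& params[0].isAssignableFrom(sourceType) && params[1].isAssignableFrom(sinkType)) {
				return m;
			}
		}
		throw new NoSuchMethodException("no transform method matches the given source/sink types");
	}
}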

Generally speaking, the main logic in a program will be something like this:

  • while (source.hasNext()) do the following:
    • sourceElement = source.getNext();
    • (transform sourceElement to produce 'transformedPayload')
    • destElement = sink.getNewDataElement(sourceElement, transformedPayload);
    • sink.writeNext(destElement);
  • sink.finishedWriting();
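Rendered as a Java skeleton, with hypothetical DataSource/DataSink/DataElement interfaces that mirror the pseudocode above (the real wrapper interfaces live in the Data Transformation Library and may differ):

// Illustrative stand-ins for the library's wrapper abstractions.
interface DataElement { byte[] payload(); }
interface DataSource { boolean hasNext(); DataElement getNext(); }
interface DataSink {
	DataElement getNewDataElement(DataElement source, byte[] transformedPayload);
	void writeNext(DataElement element);
	void finishedWriting();
}

abstract class SketchProgram {
	// Reads every element from the source, transforms it, writes it to the sink.
	public void transform(DataSource source, DataSink sink) {
		while (source.hasNext()) {
			DataElement sourceElement = source.getNext();
			byte[] transformedPayload = doTransform(sourceElement); // program-specific logic
			sink.writeNext(sink.getNewDataElement(sourceElement, transformedPayload));
		}
		sink.finishedWriting();
	}

	protected abstract byte[] doTransform(DataElement element);
}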

Implementation Overview

The gCube Data Transformation Service primarily comprises the Data Transformation Service component, which implements the WS interface of the service; the Data Transformation Library, which carries out the basic functionality of the service, i.e. the selection of the appropriate conversion element and the execution of the transformation over the information objects; a set of data handlers, which are responsible for fetching and returning/storing the information objects; and finally the conversion elements ("Programs") that perform the conversions.

Data Transformation Service

The Data Transformation Service component implements the WS interface of the service. Basically, it is the entry point for the functionality that the Data Transformation Library provides. Its main task is to check the parameters of the invocation, instantiate the data handlers, invoke the appropriate method of the Data Transformation Library and inform clients of any faults.

A Data Transformation Service's RI operates successfully over multiple scopes by keeping any necessary information for each scope independently.

Data Transformation Library

The Data Transformation Library implements the core functionality of gDTS. It contains several packages, each responsible for different features of gDTS. Its basic class is DTSCore, which orchestrates the rest of the components. A DTSCore instance contains a transformations graph, responsible for determining the transformation that will be performed (if the transformation program/unit is not explicitly set), as well as an information manager (IManager), the class responsible for fetching information about the transformation programs. The IManager implementation currently used by gDTS is the ISManager, which fetches, publishes and queries transformation programs from the gCore based Information System.

The following diagram depicts the operations applied by Data Transformation Library on data elements when a request for a transformation is made and a transformation unit has not been explicitly set.

Data Transformation Operational Diagram

For each data element, the Data Elements Broker reads its content type and, utilizing the transformations graph, determines the proper Transformation Unit that is going to perform the transformation. Each transformation path, consisting of Transformation Units, is added to the transformation plan that will be passed to the WorkflowDTSAdaptor; subsequent objects with the same content type are handled by the same plan. If an object with a different content type comes from the Data Source, the Data Elements Broker consults the Transformations Graph again and a new transformation plan is created. If the graph cannot find any applicable transformation unit for a data element, that element is ignored, as are all subsequent elements with the same content type.

Apart from the core functionality, the Data Transformation Library also contains the interfaces of the Programs and the Data Handlers, which have to be implemented by any program or data handler. Finally, the report and query packages contain classes for the reporting and querying functionality respectively.

WorkflowDTSAdaptor

The WorkflowDTSAdaptor takes a transformation plan, provided by the Data Transformation Library, as its input. For each plan a new Execution Plan is instantiated as the transformation chain described by the plan; in this way, a different transformation chain is created for every content type of the data elements provided by the source. Each plan then retrieves every data element of that particular content type and applies the transformation: Data Bridge and Program instances are created, and each data element is appended to the source Data Bridge of the Transformation Unit.

Inside the Transformation Unit the data elements are transformed one by one by the program, and the results are appended to the target data bridge contained in the transformation unit. These objects are finally merged by the Data Source Merger, which reads objects from all the transformation chains in parallel and appends them to the Data Sink.

Data Transformation Handlers

The gDTS has to perform some procedures in order to fetch and store content. These procedures are totally independent of the basic functionality of the gDTS, which is to transform one or more objects into different content formats, and they shall not affect it in any way. So whenever the gDTS is invoked, the caller-supplied data is automatically wrapped in a data source object. In a similar way, the output of the transformation is wrapped in a data sink object. The source and sink objects can then be used by the invoked Java program in order to read each source object sequentially and write its transformed counterpart to the destination. Thanks to the abstraction provided by the data sources and data sinks, this processing of data objects is done homogeneously, no matter what the nature of the original source and destination is.

Clients identify the appropriate data handler by its name in the input/output type parameter contained in each transform method of gDTS. The service then dynamically loads the Java class of the data handler that corresponds to this type.
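A minimal sketch of this kind of name-based dynamic loading; the registry map and the example class names are hypothetical, not the actual gDTS wiring.

import java.util.Map;

// Illustrative only: resolve a handler name from the request to a class name,
// then load and instantiate the class reflectively.
class DataHandlerLoader {
	private static final Map<String, String> REGISTRY = Map.of(
			"TMDataSource", "org.example.handlers.TMDataSource", // hypothetical class names
			"FTP", "org.example.handlers.FTPDataSource");

	static Object loadHandler(String inputType) throws Exception {
		String className = REGISTRY.get(inputType);
		if (className == null)
			throw new IllegalArgumentException("Unknown data handler: " + inputType);
		return Class.forName(className).getDeclaredConstructor().newInstance();
	}
}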

The available Data Handlers are:

Data Sources

Data Source Name  | Input Name   | Input Value            | Input Parameters                     | Description
TMDataSource      | TMDataSource | content collection id  | NA                                   | Fetches all the trees that belong to a tree collection.
RSBlobDataSource  | RSBlob       | result set locator     | NA                                   | Gets as input the content of a result set with blob elements.
FTPDataSource     | FTP          | host name              | username, password, directory, port | Downloads content from an FTP server.
URIListDataSource | URIList      | url                    | NA                                   | Fetches content from URLs contained in a file whose location is set as the input value.

Data Sinks

Data Sink Name | Output Name | Output Value | Output Parameters                    | Description
RSBlobDataSink | RSBlob      | NA           | NA                                   | Puts data into a result set with blob elements.
RSXMLDataSink  | RSXML       | NA           | NA                                   | Puts (XML) data into a result set with XML elements.
FTPDataSink    | FTP         | host name    | username, password, port, directory | Stores objects on an FTP server.

Data Bridges

Data Bridge Name | Parameters                           | Description
RSBlobDataBridge | NA                                   | Used as a buffer of data elements; utilizes RS in order to keep objects on disk.
REFDataBridge    | flowControled = "true|false", limit  | Keeps references to data elements. If flow control is enabled, a maximum of #limit data elements can exist in the bridge.
FilterDataBridge | NA                                   | Filters the contents of a Data Source by a content format.

Data Transformation Programs

The available transformations that the gDTS can use reside externally to the service, as separate Java classes called Programs (not to be confused with ‘Transformation Programs’). Each program is an independent, self-describing entity that encapsulates the logic of the transformation process it performs. The gDTS loads the required programs dynamically as the execution proceeds and supplies them with the input data that must be transformed. Since the loading is done at run-time, extending the gDTS transformation capabilities by adding programs is a trivial task: the new program has to be written as a Java class and referenced in the classpath, so that it can be located when required.

The gDTS provides helper functionality to simplify the creation of new programs. This functionality is exposed to the program author through a set of abstract java classes, which are included in the gCube Data Transformation Library.
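For instance, reusing the illustrative SketchProgram and DataElement types sketched in the Programs section above, a trivial new program could look like the following. The real helper classes in the gCube Data Transformation Library differ, so this is only a sketch.

import java.nio.charset.StandardCharsets;

// Illustrative only: a program that upper-cases text payloads.
class UpperCaseTransformer extends SketchProgram {
	@Override
	protected byte[] doTransform(DataElement element) {
		String text = new String(element.payload(), StandardCharsets.UTF_8);
		return text.toUpperCase(java.util.Locale.ROOT).getBytes(StandardCharsets.UTF_8);
	}
}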

The available Program implementations are:

Name                        | Description
DocToTextTransformer        | Extracts plain text from MS Word documents
ExcelToTextTransformer      | Extracts plain text from MS Excel documents
FtsRowset_Transformer       | Creates full text rowsets from XML documents
FwRowset_Transformer        | Creates forward rowsets from XML documents
GeoRowset_Transformer       | Creates geo rowsets from XML documents
ImageMagickWrapperTP        | Currently able to convert images to any image type, create thumbnails and watermark images; any other operation of the ImageMagick library can be incorporated
PDFToJPEGTransformer        | Creates JPEG images from a page of a PDF document
PDFToTextHTMLTransformer    | Converts a PDF document to HTML or text
PPTToTextTransformer        | Extracts plain text from PowerPoint documents
TextToFtsRowset_Transformer | Creates full text rowsets from plain text
XSLT_Transformer            | Applies an XSLT to an XML document
AggregateFTS_Transformer    | Transforms metadata documents coming from multiple metadata collections to a single FTS rowset
AggregateFWD_Transformer    | Transforms metadata documents coming from multiple metadata collections to a single FWD rowset
Zipper                      | Zips single or multi part files
GnuplotWrapperTP            | Creates a plot described by a gnuplot script
GraphvizWrapperTP           | Creates a graph using the Graphviz library

Client Library

Maven coordinates

	<dependency>
		<groupId>org.gcube.data-transformation</groupId>
		<artifactId>dts-client-library</artifactId>
		<version>...</version>
	</dependency>

Creating full text rowsets from tree collection

The first example demonstrates how to create full text rowsets from a tree collection. In the input field of the request we set as input type the content collection data source type, TMDataSource, and as input value the tree collection id (see Data Sources). The output field specifies that the result of the transformation will be appended into a result set, which is created by the data sink and returned in the response (see Data Sinks). Finally, the operation used in this example, transformDataWithTransformationUnit, is given the transformation program and unit to use explicitly, and the target content type, text/xml with schemaURI="http://ftrowset.xsd", is specified in the respective request parameter.

import java.util.Arrays;

import org.gcube.common.scope.api.ScopeProvider;
import org.gcube.datatransformation.client.library.beans.Types.*;
import org.gcube.datatransformation.client.library.exceptions.DTSException;
import org.gcube.datatransformation.client.library.proxies.DTSCLProxyI;
import org.gcube.datatransformation.client.library.proxies.DataTransformationDSL;

public class DTSClient_CreateFTRowsetFromContent {

	public static void main(String[] args) throws Exception {
		String scope = args[0];
		String id = args[1];
		ScopeProvider.instance.set(scope);
		DTSCLProxyI proxyRandom = DataTransformationDSL.getDTSProxyBuilder().build();

		TransformDataWithTransformationUnit request = new TransformDataWithTransformationUnit();
		request.tpID = "$FtsRowset_Transformer";
		request.transformationUnitID = "6";

		/* INPUT */
		Input input = new Input();
		input.inputType = "TMDataSource";
		input.inputValue = id;
		request.inputs = Arrays.asList(input);

		/* OUTPUT */
		request.output = new Output();
		request.output.outputType = "RS2";

		/* TARGET CONTENT TYPE */
		request.targetContentType = new ContentType();
		request.targetContentType.mimeType = "text/xml";
		Parameter param = new Parameter("schemaURI", "http://ftrowset.xsd");
		
		request.targetContentType.parameters = Arrays.asList(param);

		/* PROGRAM PARAMETERS */
		Parameter xsltParameter1 = new Parameter("xslt:1", "$BrokerXSLT_DwC_anylanguage_to_ftRowset_anylanguage");
		Parameter xsltParameter2 = new Parameter("xslt:2", "$BrokerXSLT_Properties_anylanguage_to_ftRowset_anylanguage");
		Parameter xsltParameter3 = new Parameter("xslt:3", "$BrokerXSLT_PROVENANCE_anylanguage_to_ftRowset_anylanguage");
		Parameter xsltParameter4 = new Parameter("finalftsxslt", "$BrokerXSLT_wrapperFT");
		
		Parameter indexTypeParameter = new Parameter("indexType", "ft_2.0");

		request.tProgramUnboundParameters = Arrays.asList(xsltParameter1, xsltParameter2, xsltParameter3, xsltParameter4, indexTypeParameter);

		request.filterSources = false;
		request.createReport = false;

		TransformDataWithTransformationUnitResponse response = null;
		try {
			response = proxyRandom.transformDataWithTransformationUnit(request);
		} catch (DTSException e) {
			e.printStackTrace();
		}
		String output = response.output;	
	}
}

Creating forward rowsets from tree collection

The second example demonstrates how to create forward rowsets from a tree collection. In the input field of the request we set as input type the content collection data source type, TMDataSource, and as input value the tree collection id (see Data Sources). The output field specifies that the result of the transformation will be appended into a result set, which is created by the data sink and returned in the response (see Data Sinks). Finally, the operation used in this example, transformDataWithTransformationUnit, is given the transformation program and unit to use explicitly, and the target content type, text/xml with schemaURI="http://fwrowset.xsd", is specified in the respective request parameter.

import java.util.Arrays;

import org.gcube.common.scope.api.ScopeProvider;
import org.gcube.datatransformation.client.library.beans.Types.*;
import org.gcube.datatransformation.client.library.exceptions.DTSException;
import org.gcube.datatransformation.client.library.proxies.DTSCLProxyI;
import org.gcube.datatransformation.client.library.proxies.DataTransformationDSL;

public class DTSClient_CreateFWRowsetFromContent {

	public static void main(String[] args) throws Exception {
		String scope = args[0];
		String id = args[1];
		ScopeProvider.instance.set(scope);
		DTSCLProxyI proxyRandom = DataTransformationDSL.getDTSProxyBuilder().build();

		TransformDataWithTransformationUnit request = new TransformDataWithTransformationUnit();
		request.tpID = "$FwRowset_Transformer";
		request.transformationUnitID = "1";

		/* INPUT */
		Input input = new Input();
		input.inputType = "TMDataSource";
		input.inputValue = id;
		request.inputs = Arrays.asList(input);

		/* OUTPUT */
		request.output = new Output();
		request.output.outputType = "RS2";

		/* TARGET CONTENT TYPE */
		request.targetContentType = new ContentType();
		request.targetContentType.mimeType = "text/xml";
		Parameter param = new Parameter("schemaURI", "http://fwrowset.xsd");
		
		request.targetContentType.parameters = Arrays.asList(param);

		/* PROGRAM PARAMETERS */
		Parameter xsltParameter1 = new Parameter("xslt:1", "$BrokerXSLT_DwC_anylanguage_to_fwRowset_anylanguage");
		Parameter xsltParameter2 = new Parameter("finalfwdxslt", "$BrokerXSLT_wrapperFWD");
		
		request.tProgramUnboundParameters = Arrays.asList(xsltParameter1, xsltParameter2);

		request.filterSources = false;
		request.createReport = false;

		TransformDataWithTransformationUnitResponse response = null;
		try {
			response = proxyRandom.transformDataWithTransformationUnit(request);
		} catch (DTSException e) {
			e.printStackTrace();
		}
		String output = response.output;
	}
}

Finding applicable transformation units

This example demonstrates how it is possible to search for transformation units that are able to perform a transformation from a source to a target content type. Here we are trying to find one or more transformation units that can transform a GIF image to JPEG format.

import org.gcube.common.scope.api.ScopeProvider;
import org.gcube.datatransformation.client.library.beans.Types.*;
import org.gcube.datatransformation.client.library.exceptions.DTSException;
import org.gcube.datatransformation.client.library.proxies.DTSCLProxyI;
import org.gcube.datatransformation.client.library.proxies.DataTransformationDSL;

public class FindApplicableTransformationUnitsClient {
	
	public static void main(String[] args) throws Exception {
		ScopeProvider.instance.set(args[0]);
		DTSCLProxyI proxyRandom = DataTransformationDSL.getDTSProxyBuilder().build();

		FindApplicableTransformationUnits request = new FindApplicableTransformationUnits();
		
		request.sourceContentType = new ContentType();
		request.sourceContentType.mimeType = "image/gif";

		request.targetContentType = new ContentType();
		request.targetContentType.mimeType = "image/jpeg";

		request.createAndPublishCompositeTP = false;
		FindApplicableTransformationUnitsResponse output = null;
		try {
			output = proxyRandom.findApplicableTransformationUnits(request);
		} catch (DTSException e) {
			e.printStackTrace();
		}

		for(TPAndTransformationUnit tr : output.TPAndTransformationUnitIDs) {
			System.out.println(tr.transformationProgramID + " " + tr.transformationUnitID);
		}
	}
}