ServiceManager Guide
This part of the guide is intended to cover the installation and configuration of gCube services that are not mentioned in the Administration guide. It mainly covers services that are not Enabling services and that can be installed dynamically by the Infrastructure/VO Managers. For each component, the guide also lists the known issues and the specific configuration steps to follow.
Search
Search V 2.xx
The installation of a Search node in gCube consists, in the minimal configuration, of the installation of 2 web services:
- SearchSystemService
- ExecutionEngineService
This is the minimal installation scenario, but it is also possible to enable distributed search; this requires the installation and configuration of several ExecutionEngineServices.
HW requirements
The minimal installation requirements for a Search node are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Configuration
The SearchSystemService and ExecutionEngineService have to be deployed, automatically or manually, in a VRE scope. In addition, if we want to configure the SearchSystemService to exploit the local ExecutionEngineService to run the queries (minimal installation), the JNDI file of the service should be configured as follows (see the sketch after this list):
- excludeLocal = false
- collocationThreshold = 0.3f
- complexPlanNumNodes = 800000
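For illustration only, assuming the parameters above are set as environment entries in the standard gCore deploy-jndi-config.xml of the SearchSystemService, they might look like the fragment below; the element layout, the Java types and the dropped 'f' suffix are assumptions, not values prescribed by this guide.
<!-- illustrative fragment only: JNDI environment entries for the
     SearchSystemService; element layout and types are assumptions -->
<environment name="excludeLocal" value="false" type="java.lang.Boolean" override="false" />
<environment name="collocationThreshold" value="0.3" type="java.lang.Float" override="false" />
<environment name="complexPlanNumNodes" value="800000" type="java.lang.Integer" override="false" />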
Search v 3.x.x
The 3.0 version has moved to SmartGears and Tomcat.
The co-deployment requirement with the Execution Engine Service still holds, so the Execution Engine Service v 2.0.0 has also been ported to SmartGears.
HW requirements
The minimal installation requirements for a Search node are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Configuration
In order to fix an issue with DataNucleus compatibility and Java 7, the following change has to be included in the Tomcat configuration:
- uncomment and modify the following line in the $CATALINA_HOME/bin/catalina.sh file:
JAVA_OPTS="$JAVA_OPTS -noverify -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"
- The conf file $CATALINA_HOME/conf/infrastructure.properties, containing the infrastructure and scope information, needs to be present:
# a single infrastructure
infrastructure=d4science.research-infrastructures.eu
# multiple scopes must be separated by a comma (e.g. FARM,gCubeApps)
scopes=Ecosystem
clientMode=false
- The conf file $CATALINA_HOME/webapps/<search>/WEB-INF/classes/deploy.properties needs to be filled with the following information (see the sketch below):
hostname = xx
startScopes = xx
port = xx
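As an illustration, the two configuration files above might be created as follows; the webapp directory name and the angle-bracket placeholders are hypothetical and must be replaced with the actual values of the node.
# create the infrastructure configuration (values reproduce the example above;
# adjust infrastructure and scopes to your deployment)
cat > $CATALINA_HOME/conf/infrastructure.properties <<'EOF'
# a single infrastructure
infrastructure=d4science.research-infrastructures.eu
# multiple scopes must be separated by a comma (e.g. FARM,gCubeApps)
scopes=Ecosystem
clientMode=false
EOF

# create the service deployment configuration; <...> values are placeholders
cat > $CATALINA_HOME/webapps/search/WEB-INF/classes/deploy.properties <<'EOF'
hostname = <node hostname>
startScopes = <scopes the service starts in>
port = <tomcat port>
EOF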
Known Issues
Execution Engine
The 2.0 version has moved to SmartGears and Tomcat.
HW requirements
The minimal installation requirements for an Execution Engine node are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Installation
Different packagings of the Execution Engine are available, depending on the service they are going to be co-deployed with and invoked by:
- DTS : <artifactId>executionengineservice-dts</artifactId>
- Search: <artifactId>executionengineservice-search</artifactId>
Configuration
In order to fix an issue with DataNucleus compatibility and Java 7, the following change has to be included in the Tomcat configuration:
- uncomment and modify the following line in the $CATALINA_HOME/bin/catalina.sh file:
JAVA_OPTS="$JAVA_OPTS -noverify -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"
- The conf file $CATALINA_HOME/conf/infrastructure.properties, containing the infrastructure and scope information, needs to be present:
# a single infrastructure
infrastructure=d4science.research-infrastructures.eu
# multiple scopes must be separated by a comma (e.g. FARM,gCubeApps)
scopes=Ecosystem
clientMode=false
- The conf file $CATALINA_HOME/webapps/<execution-engine>/WEB-INF/classes/deploy.properties needs to be filled with the following information:
hostname = xx
startScopes = xx
port = xx
pe2ng.port = 4000
- in case the Execution Engine needs to call DTS, add the following to the container.xml file (see the sketch below):
<property name='dts.execution' value='true' />
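Analogous to the Search example above, a hedged sketch of the Execution Engine configuration; the webapp directory name, the angle-bracket placeholders and the location of container.xml are assumptions that depend on the actual SmartGears installation.
# deploy.properties for the Execution Engine webapp; <...> values are placeholders
cat > $CATALINA_HOME/webapps/execution-engine/WEB-INF/classes/deploy.properties <<'EOF'
hostname = <node hostname>
startScopes = <scopes the service starts in>
port = <tomcat port>
pe2ng.port = 4000
EOF

# if the Execution Engine must call DTS, check that container.xml declares the
# dts.execution property (the file location depends on the SmartGears setup)
grep "dts.execution" /path/to/container.xml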
Executor and GenericWorker
HW requirements
The minimal installation requirements for an Executor node with a Generic Worker plugin are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Configuration
The following software should be installed on the VM (a sketch of the installation commands follows the list):
- R version 2.14.1
with the following components:
- coda
- R2jags
- R2WinBUGS
- rjags
- bayesmix
- runjags
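A minimal sketch of installing R and the components listed above, assuming a Debian-like VM with CRAN access; the system package names (r-base-core, jags) and the repository URL are assumptions, and pinning exactly R 2.14.1 may require an installation from source instead.
# install R and JAGS (JAGS is required by the rjags/R2jags/runjags packages);
# the package names are assumptions for a Debian-like system
sudo apt-get install r-base-core jags

# install the R components listed above from CRAN
sudo R -e 'install.packages(c("coda","R2jags","R2WinBUGS","rjags","bayesmix","runjags"), repos="http://cran.r-project.org")'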
Known Issues
- The GenericWorker is exploited by the Statistical Manager service to run distributed computations. Given that the SM uses the root scope to discover instances of the GenericWorker, the plugin must be deployed at root scope level.
- Given that the GenericWorker plugin depends on the Executor Service, when dynamically deploying the plugin the Executor Service is also deployed.
DTS
DTS v2.x
HW requirements
The minimal installation requirements for a DTS node are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Configuration
DTS uses the Execution Engine to run the transformations, so at least one Execution Engine should be deployed in the same scope as DTS, and the related GHNLabels.xml file should contain:
<Variable> <Key>dts.execution</Key> <Value>true</Value> </Variable>
Known Issues
none
DTS v3.x
HW requirements
The minimal installation requirements for a DTS node with a Generic Worker plugin are a single CPU node with 2 GB of RAM, but it is strongly recommended to have at least 3 GB of RAM on the node dedicated to the GHN.
Configuration
- The conf file $CATALINA_HOME/conf/infrastructure.properties, containing the infrastructure and scope information, needs to be present:
# a single infrastructure
infrastructure=d4science.research-infrastructures.eu
# multiple scopes must be separated by a comma (e.g. FARM,gCubeApps)
scopes=Ecosystem
clientMode=false
- The conf file $CATALINA_HOME/webapps/<dts>/WEB-INF/classes/deploy.properties needs to be filled with the following information:
hostname = xx
startScopes = xx
port = xx
DTS uses the Execution Engine to run the transformations, so at least one Execution Engine should be deployed in the same scope as DTS, and the related SmartGears conf file (container.xml) should have this property:
<property name='dts.execution' value='true' />
Index
Index Service
The Index Service is the latest released RESTful service running on SmartGears. It implements both FW (forward) and FT (full-text) index functionalities.
HW requirements
Given the co-deployment with ElasticSearch (embedded), a VM with at least 4 GB of RAM and 2 CPUs is recommended.
The open file limit should also be raised to 32000 (see the sketch below).
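A hedged sketch of raising the open file limit; the service user name (gcube) and the use of /etc/security/limits.conf are assumptions that depend on the OS and on how the node is run.
# raise the maximum number of open files for the user running the service
# (the user name is an assumption -- use the account that runs Tomcat/SmartGears)
echo "gcube soft nofile 32000" | sudo tee -a /etc/security/limits.conf
echo "gcube hard nofile 32000" | sudo tee -a /etc/security/limits.conf

# verify the new limit after logging in again as that user
ulimit -n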
Configuration
Details on the Index Service configuration are available at https://gcube.wiki.gcube-system.org/gcube/index.php/Index_Management_Framework#Deployment_Instructions
ForwardIndexNode (Dismissed)
The ForwardIndexNode service needs to be co-deployed with an instance of the Couchbase service.
HW requirements
Given the co-deployment with Couchbase, a VM with at least 4 GB of RAM and 2 CPUs is recommended.
Configuration
The installation of Couchbase should be performed manually and depends on the OS (binary package, rpm, deb).
It is recommended to set a higher limit on the open files on the VM (32000 minimum).
The configuration parameters of the FWIndexNode that should be customized (JNDI file) are:
- couchBaseIP = IP of the server hosting Couchbase (so the same as the GHN)
- couchBaseUseName = the username set when configuring Couchbase
- couchBasePassword = the password set when configuring Couchbase
Once configured, Couchbase needs to be initialized using the cb_initialize_node.sh script contained in the service configuration folder.
Known Issues
- Sometimes the cb_initialize_node.sh script fails; this could mean that there is not enough memory to initialize the data bucket. Try reducing the value of ramQuota in the JNDI file.
Statistical Manager
Resources
Runtime Resources (Resource | Scope | Used by):
DataStorage/StorageManager | VO/VRE | StorageManager
Database/Obis2Repository | VRE | Trendylyzer
Database/StatisticalManagerDatabase | INFRA/VO/VRE | Statistical
Database/AquamapsDB | VO/VRE | Algorithms
Database/FishCodesConversion | VO/VRE | Algorithms
Database/FishBase | VO/VRE | Algorithms - TaxaMatch
DataStorage/Storage Manager | INFRA/VO/VRE | All
Gis/Geoserver1..n | VRE | Maps Algorithms
Gis/TimeSeriesDatastore | VO/VRE | Maps Algorithms
Gis/GeoNetwork | VRE | Maps Algorithms
Service/MessageBroker | VO | Service
BiodiversityRepository/CatalogofLife | VO/VRE | Occurrence Algorithms
BiodiversityRepository/GBIF | VO/VRE | Occurrence Algorithms
BiodiversityRepository/ITIS | VO/VRE | Occurrence Algorithms
BiodiversityRepository/WoRDSS | VO/VRE | Occurrence Algorithms
BiodiversityRepository/WoRMS | VO/VRE | Occurrence Algorithms
BiodiversityRepository/OBIS | VO/VRE | Occurrence Algorithms
BiodiversityRepository/NCBI | VO/VRE | Occurrence Algorithms
BiodiversityRepository/SpeciesLink | VO/VRE | Occurrence Algorithms
WS Resources (Resource | Scope | Used by):
Workers | INFRA/VO | Parallel Computations
Generic Resources (Resource | Scope | Used by):
ISO/MetadataConstants | VO/VRE | Maps Algorithms
Known Issues
Tested on ghn 4.0.0 and StatisticalManager service 1.4.0:
- install the SM on the same network where the database and the other used resources are located; otherwise, production databases would have to be restarted because direct access could not be granted to such resources
- remove the library axis-1.4.jar from gCore/lib
- replace the library hsqldb-1.8.jar with the library hsqldb-2.2.8.jar in gCore/lib (see the sketch below)
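A minimal shell sketch of the two library adjustments above, assuming gCore is installed under $GLOBUS_LOCATION (so gCore/lib is $GLOBUS_LOCATION/lib) and that hsqldb-2.2.8.jar has already been downloaded to the home directory; both paths are assumptions.
# gCore/lib is assumed to be $GLOBUS_LOCATION/lib
cd $GLOBUS_LOCATION/lib
# remove the conflicting Axis library
rm axis-1.4.jar
# replace the old HSQLDB library with version 2.2.8
rm hsqldb-1.8.jar
cp ~/hsqldb-2.2.8.jar .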
Services and Databases used by the Statistical Manager and Data Analysis facilities
GHN
gcube@statistical-manager1.d4science.org
gcube@statistical-manager2.d4science.org
gcube@statistical-manager3.d4science.org
gcube@statistical-manager4.d4science.org
gcube2@statistical-manager.d.d4science.org
TOMCAT
(root user)
thredds.research-infrastructures.eu
wps.statistical.d4science.org
rstudio.p.d4science.research-infrastructures.eu
geoserver.d4science.org
geoserver2.d4science.org
geoserver3.d4science.org
geoserver4.d4science.org
geoserver-dev.d4science-ii.research-infrastructures.eu
geoserver-dev2.d4science-ii.research-infrastructures.eu
geonetwork.geothermaldata.d4science.org
geonetwork.d4science.org
THIRD PARTY SERVICES
(root user)
rstudio.p.d4science.research-infrastructures.eu (sw rstudio, command: rstudio-server restart)
DATABASES
(root user)
geoserver-db.d4science.org
node49.p.d4science.research-infrastructures.eu
biodiversity.db.i-marine.research-infrastructures.eu
db1.p.d4science.research-infrastructures.eu
db5.p.d4science.research-infrastructures.eu
dbtest.research-infrastructures.eu
dbtest3.research-infrastructures.eu
geoserver.d4science-ii.research-infrastructures.eu
geoserver2.i-marine.research-infrastructures.eu
geoserver-db.d4science.org
geoserver-test.d4science-ii.research-infrastructures.eu
node50.p.d4science.research-infrastructures.eu
node49.p.d4science.research-infrastructures.eu
node59.p.d4science.research-infrastructures.eu
obis2.i-marine.research-infrastructures.eu
statistical-manager.d.d4science.org
WORKER NODES
(gcube2 user)
(production)
node3.d4science.org
node4.d4science.org
node11.d4science.org
node12.d4science.org
node13.d4science.org
node14.d4science.org
node15.d4science.org
node16.d4science.org
node18.d4science.org
node20.d4science.org
node21.d4science.org
node23.d4science.org
node27.d4science.org
node28.d4science.org
node29.d4science.org
node30.d4science.org
node31.d4science.org
node32.d4science.org
node33.d4science.org
node34.d4science.org
node35.d4science.org
node36.d4science.org
node37.d4science.org
node38.d4science.org
node39.d4science.org
(development)
node17.d4science.org
node19.d4science.org
node22.d4science.org
GIS Technologies
In order to handle GIS technologies, developers should rely on the geonetwork and gisinterface libraries, both distributed under the org.gcube.spatial.data subsystem. Depending on which libraries are used, different resources are mandatory.
Geonetwork
This section covers the default behavior of the geonetwork library. Please note that clients of the library might override it.
Geonetwork Service Discovery
A single Service Endpoint per Geonetwork instance is needed; you can find more details on the resource here.
Metadata Publication
In order to exploit the library's features to generate ISO metadata, the following Generic Resource is needed in the scope:
- Secondary Type : ISO
- Name : MetadataConstants
GeoServer
In order to let the gisinterface library discover instances of GeoServer, an Access Point must be defined for each instance. The Service Endpoint resource for such Access Points must have (an illustrative fragment follows the list):
- Category : Gis
- Platform/Name : GeoServer
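For illustration only, a fragment of how such a Service Endpoint profile might look. The element layout follows the usual gCube ServiceEndpoint resource, but the exact schema, the resource name and the endpoint URL shown here are assumptions, not values prescribed by this guide.
<!-- illustrative fragment only: element layout is an assumption, the URL and
     names are hypothetical placeholders -->
<Profile>
  <Category>Gis</Category>
  <Name>my-geoserver-instance</Name>
  <Platform>
    <Name>GeoServer</Name>
  </Platform>
  <AccessPoint>
    <Interface>
      <Endpoint EntryName="geoserver">http://geoserver.example.org/geoserver</Endpoint>
    </Interface>
  </AccessPoint>
</Profile>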