Conceptually, an instance of Taverna Server exists to manage a collection of workflow runs, as well as some global information about the server's capabilities that is provided to all users. The server also supports a per-user Atom feed that allows you to find out when your workflows terminate without having to poll each one separately. This feed is at
https://«SERVER:PORT»/taverna-server/feed (with the default web-application name). The feed is not available to anonymous users, and will only accept updates from the internal notification mechanism.
Each workflow run is associated with a working directory that is specific to that run; the name of the working directory is a value that is not repeated for any other run. Within the working directory, these subdirectories[1] will be created:
- conf: Contains optional additional configuration files for the Taverna execution engine; empty by default.
- externaltool: Contains optional additional configuration files for the external tool plugin; empty by default.
- lib: Contains additional libraries that will be made available to beanshell scripts; empty by default.
- logs: Location that logs will be written to. In particular, this will eventually contain the file detail.log, which can be very useful when debugging a workflow.
- out: Location that output files will be written to if they are not collected into a Baclava file. This directory is only created during the workflow run; it should not be made beforehand.
- plugins: Contains the additional plug-in code that is to be supported for the specific workflow run.
- t2-database: Contains the database working files used by the Taverna execution engine.
All file access operations are performed on files and directories beneath the working directory. The server prevents all access to directories outside of that, so as to promote proper separation of the workflow runs. (Note in particular that the credential manager configuration directory will not be accessible; it is managed directly by the server.)
Associated with each workflow run is a state. The state transition diagram is this:
The blue states are the initial and final states, and the states in italics cannot be observed in practice. The black arrows represent automatic state changes, the blue arrows are manually-triggered transitions, and the red arrows are destructions, which can be done from any state (other than the initial unobservable one) and which may be either manually or automatically triggered; automatic destruction happens when the run reaches its expiry time (which you can set but cannot remove). Note that there are two transitions from Operating to Finished; they are not equivalent. The automatic transition represents the termination of the workflow execution with such outputs produced as are going to be generated, whereas the manual transition is where the execution is killed, and outputs may not be generated even if they conceptually existed at that point. Also note that only the transition from Initialized to Operating represents the start of the workflow execution engine.
Each workflow run is associated with a unique identifier, which is constant for the life of the run. This identifier is used directly by the SOAP interface and forms part of the URI for a run in the REST interface, but it is the same between the two. Any run may be accessed and manipulated via either interface, so long as the right identifier is used and you have permission to do the action concerned. The permissions associated with a run are the ability to read features of the run and files associated with it, the ability to update features (including creating files), and the ability to control the lifespan of a run and destroy it, each of which implies the ones before it as well. The owner of a run (i.e., the user who created it) always has all those permissions, and can also manipulate the security configuration of the run — these permissions and any credentials granted to the run such as passwords and X.509 key-pairs — which are otherwise totally shrouded in the execution interface. The permissions of a user to access a particular run can also be set to none, which removes all granted permissions and restores the default (no access granted at all).
Associated with each run are a number of listeners. This release of the server only supports a single listener, “io”, which is applied automatically. This listener is responsible for detecting a number of technical features of the workflow run and exposing them. In particular, it reports any output produced by the workflow engine on either stdout or stderr, what the result (“exitcode”) would be, where to send termination notifications to (“notificationAddress”) and what resources were used during the workflow run (“usageRecord”).
The (RESTful) Usage Pattern
The Taverna 2 Server supports both REST and SOAP APIs; you may use either API to access the service and any of the workflow runs hosted by the service. The full service descriptions are available at
http://«SERVER:PORT»/taverna-server/services but to illustrate their use, here's a sample execution using the REST API.
- The client starts by creating a workflow run. This is done by POSTing a T2flow document to the service at the address http://«SERVER:PORT»/taverna-server/rest/runs; the submitted document may be either wrapped in XML or an unwrapped T2flow document (provided the right HTTP content type is used).
When using the wrapped form, the wrapping of the submitted document is a single XML element, workflow, in the namespace http://ns.taverna.org.uk/2010/xml/server/, and the workflow (as saved by the Taverna Workbench) is the child element of that.
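Schematically, a wrapped submission therefore looks like the following sketch; the t2s prefix is arbitrary, and the inner element's content and namespace (shown here as the T2flow namespace used by the Workbench) are abbreviated assumptions:

```xml
<t2s:workflow xmlns:t2s="http://ns.taverna.org.uk/2010/xml/server/">
  <!-- The workflow element exactly as saved by the Taverna Workbench. -->
  <workflow xmlns="http://taverna.sf.net/2008/xml/t2flow">
    <!-- dataflows, processors, etc. -->
  </workflow>
</t2s:workflow>
```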
The result of the POST is an HTTP 201 Created response that gives the location of the created run (in a Location header), hereafter denoted «RUN_URI» (it includes a UUID, which you will need to save in order to access the run again, though the list of known UUIDs can be found at the address above). Note that the run is not yet actually doing anything.
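As a sketch in Python (stdlib only): the function below builds, but does not send, the run-creation request. The application/xml media type for a wrapped submission and the host name are assumptions; check the service description for the exact content types accepted.

```python
import urllib.request

def build_run_creation_request(service_root, wrapped_workflow):
    """Build the POST that creates a new workflow run.

    service_root: e.g. "http://example.com:8080/taverna-server" (hypothetical host).
    wrapped_workflow: the XML-wrapped T2flow document, as bytes.
    """
    return urllib.request.Request(
        url=service_root + "/rest/runs",
        data=wrapped_workflow,
        method="POST",
        # Assumed media type; an unwrapped T2flow needs its own content type.
        headers={"Content-Type": "application/xml"},
    )

req = build_run_creation_request("http://example.com:8080/taverna-server",
                                 b"<workflow/>")
# Sending with urllib.request.urlopen(req) should yield a 201 Created
# response whose Location header is the «RUN_URI» to save.
```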
- Next, you need to set up the inputs to the workflow ports. This is done by either uploading a file that is to be read from, or by directly setting the value.
- Directly Setting the Value of an Input
To set the input port FOO to have the value BAR, you would PUT a suitable message to the URI of that port.
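A plausible shape for that message, reusing the server namespace given earlier with a rest/ suffix; all of these element and namespace names are assumptions to be checked against the service description:

```xml
<t2sr:runInput xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/">
  <t2sr:value>BAR</t2sr:value>
</t2sr:runInput>
```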
- Uploading a File for One Input
The values for an input port can also be set by means of creating a file on the server. Thus, if you were staging the value BAR to input port FOO by means of a file BOO.TXT, then you would first POST a suitable upload message to «RUN_URI»/wd.
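A plausible shape for the upload message, assuming an element that names the destination file and carries its base64-encoded content (names here are unverified assumptions):

```xml
<t2sr:upload xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/"
             t2sr:name="BOO.TXT">QkFS</t2sr:upload>
```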
Note that “QkFS” is the base64-encoded form of “BAR”, and that each workflow run has its own working directory into which uploaded files are placed; you are never told the name of this working directory.
You can also PUT the contents of the file (as application/octet-stream) directly to the virtual resource name that you want to create the file as; for the contents “BAR” that would be the three bytes 66, 65, 82 (with appropriate HTTP headers). This particular method supports the upload of very large files if necessary.
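The encoding claims above are easy to verify with Python's standard library:

```python
import base64

# "QkFS" is indeed the base64 form of the three-byte content "BAR".
encoded = base64.b64encode(b"BAR").decode("ascii")
print(encoded)          # QkFS

# The raw-PUT alternative sends those same bytes directly.
print(list(b"BAR"))     # [66, 65, 82]

# Round-trip check: decoding restores the original content.
assert base64.b64decode(encoded) == b"BAR"
```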
Once you've created the file, you can then set it to be the input for the port by PUTting a message naming the file to the URI of the port.

Note the similarity of the final part of this process to the previous method for setting an input.
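A plausible shape for that file-naming message, assuming it mirrors the value-setting form with a file element in place of a value element (names are assumptions):

```xml
<t2sr:runInput xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/">
  <t2sr:file>BOO.TXT</t2sr:file>
</t2sr:runInput>
```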
You can also create a directory, e.g., IN, to hold the input files. This is done by POSTing a different message to «RUN_URI»/wd.

With that, you can then create files in the IN subdirectory by sending the upload message to «RUN_URI»/wd/IN, and you can use the file as an input by using a name such as IN/BOO.TXT. You can also create sub-subdirectories if required by sending the mkdir message to the natural URI of the parent directory, just as sending an upload message to that URI creates a file in that directory.
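A plausible shape for the directory-creation (mkdir) message, assuming the directory name travels as an attribute (names are assumptions to be checked against the service description):

```xml
<t2sr:mkdir xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/"
            t2sr:name="IN" />
```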
- Using a File Already on the Taverna Server Installation
You can use an existing file attached to a workflow run on the same server, provided you have permission to access that run. You do this by using a PUT to set the input to a reference (the actual URL below is just an example, but it must be the full URL to the file):
The data will be copied across efficiently into a run-local file. This version of Taverna Server does not support accessing files stored on any other server or on the general web via this mechanism.
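A plausible shape for the reference-setting message, assuming a reference element holding the full URL of the source file (both the element names and the URL shown are illustrative assumptions):

```xml
<t2sr:runInput xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/">
  <t2sr:reference>http://«SERVER:PORT»/taverna-server/rest/runs/«OTHER_UUID»/wd/file.txt</t2sr:reference>
</t2sr:runInput>
```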
- Uploading a Baclava File
The final way of setting up the inputs to a workflow is to upload (using the same method as above) a Baclava file (e.g., FOOBAR.BACLAVA) that describes the inputs. This is then set as the provider for all inputs by PUTting the name of the Baclava file (as plain text) to the run's Baclava input resource.
- If your workflow depends on external libraries (e.g., for a beanshell or API consumer service), these should be uploaded to «RUN_URI»/wd/lib; the name of the file that you create there should match the name that you would use in a local run of the service.
- If the workflow refers to a secured external service, it is necessary to supply some additional credentials. For a SOAP web-service, these credentials are associated in Taverna with the WSDL description of the web service. The credentials must be supplied before the workflow run starts.
To set a username and password for a service, you would POST to «RUN_URI»/security/credentials a message giving the service, username and password (assuming that the WSDL address is “//host/serv.wsdl”, that the username to use is “fred123”, and that the password is “«PASSWORD»”).
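A sketch of such a credential message, assuming a userpass form; the element and namespace names here are illustrative and should be checked against the service description:

```xml
<t2sr:credential xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/"
                 xmlns:t2s="http://ns.taverna.org.uk/2010/xml/server/">
  <t2s:userpass>
    <t2s:serviceURI>//host/serv.wsdl</t2s:serviceURI>
    <t2s:username>fred123</t2s:username>
    <t2s:password>«PASSWORD»</t2s:password>
  </t2s:userpass>
</t2sr:credential>
```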
For REST services, the simplest way to find the correct security URI to use with the service is to run a short workflow against the service in the Taverna Workbench and to then look up the URI in the credential manager.
- Now you can start the workflow running. This is done by using a PUT to set «RUN_URI»/status to the plain text value Operating.
- Now you need to poll, waiting for the workflow to finish. To discover the state of a run, you can (at any time) do a GET on «RUN_URI»/status; when the workflow has finished executing, this will return Finished (as opposed to Operating, or Initialized, the starting state).
There is a fourth state, Stopped, but it is not supported in this release.
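The polling step can be sketched as a small loop; the HTTP GET on «RUN_URI»/status is abstracted behind a caller-supplied function here, so the control flow is clear without a live server:

```python
import time

def wait_for_finish(get_status, poll_interval=5.0):
    """Poll until the run reaches its final state.

    get_status: zero-argument callable returning the plain-text body of a
    GET on «RUN_URI»/status, i.e. one of "Initialized", "Operating",
    "Stopped" or "Finished".
    """
    while True:
        state = get_status()
        if state == "Finished":
            return state
        time.sleep(poll_interval)

# Simulated run: two polls while Operating, then done.
states = iter(["Operating", "Operating", "Finished"])
result = wait_for_finish(lambda: next(states), poll_interval=0.0)
print(result)  # Finished
```

Remember that a long-running poll loop should also keep advancing the run's expiry time, as described below.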
- Every workflow run has an expiry time, after which it will be destroyed and all resources (i.e., local files) associated with it cleaned up. By default in this release, this is 1 day after initial creation. To see when a particular run is scheduled to be disposed of, do a GET on «RUN_URI»/expiry; you may set the time when the run is disposed of by PUTting a new time to that same URI. Note that the run's lifetime covers not just the time when the workflow is executing, but also when the input files are being created beforehand and when the results are being downloaded afterwards; you are advised to make your clients regularly advance the expiry time while the run is in use.
- The outputs from the workflow are files created in the out subdirectory of the run's working directory. The contents of the subdirectory can be read by doing a GET on «RUN_URI»/wd/out, which will return an XML document describing the contents of the directory, with links to each of the files within it. Doing a GET on those links will retrieve the actual created files (as uninterpreted binary data).
Thus, if a single output FOO.OUT was produced from the workflow, it would be written to the file that can be retrieved from «RUN_URI»/wd/out/FOO.OUT, and the result of a GET on «RUN_URI»/wd/out would be a listing document describing that file.
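Schematically, such a listing might look like the following; the element and attribute names here are assumptions (the real names come from the service description), with xlink-style links to each file:

```xml
<t2sr:directoryContents
    xmlns:t2sr="http://ns.taverna.org.uk/2010/xml/server/rest/"
    xmlns:t2s="http://ns.taverna.org.uk/2010/xml/server/"
    xmlns:xlink="http://www.w3.org/1999/xlink">
  <t2s:file t2s:name="FOO.OUT"
            xlink:href="«RUN_URI»/wd/out/FOO.OUT" />
</t2sr:directoryContents>
```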
- The standard output and standard error from the T2 Command Line Executor subprocess can be read via properties of the special I/O listener. To do that, do a GET on «RUN_URI»/listeners/io/properties/stdout (or .../stderr). Once the subprocess has finished executing, the I/O listener will provide a third property containing the exit code of the subprocess, called exitcode.
Note that the supported set of listeners and properties will be subject to change in future versions of the server, and should not be relied upon.
- Once you have finished, destroy the run by doing a DELETE on «RUN_URI». Once you have done that, none of the resources associated with the run (including both input and output files) will exist any more. If the run is still executing, this will also cause it to be stopped.
All operations described above have equivalents in the SOAP service interface.
[1] Each run also has _repository and var directories created for it; their purpose is not documented and they are initially empty.