
# Terra Scientific Pipelines Service

[SonarCloud quality gate](https://sonarcloud.io/summary/new_code?id=DataBiosphere_terra-scientific-pipelines-service)

## Overview

Terra Scientific Pipelines Service, or Teaspoons, facilitates running a number of defined scientific pipelines on behalf of users; these are pipelines that users can't run themselves in Terra. The most common reason for this is that a pipeline accesses proprietary data that users are not allowed to access directly, but that may be used, e.g., as a reference panel for imputation.

## Supported pipelines

Currently supported pipelines are:

- [in development] Imputation (TODO add link/info)

## Architecture

- [Architecture Doc](https://docs.google.com/document/d/1dAPwOG2z1h0B5CszeQ0DfyToniNV_3y1OBV7x7L8ofI/edit?usp=sharing)
- [Architecture Diagram](https://lucid.app/lucidchart/2f067b5e-2d40-41b4-a5f3-a9dc72d83820/edit?viewport_loc=-72%2C25%2C1933%2C1133%2C0_0&invitationId=inv_97522cca-1b6d-44fe-9552-8f959d410dd7)

## Development

This codebase is in initial development.

### Requirements

#### Technical

This service is written in Java 17 and uses Postgres 13. To run locally, you'll also need:

- jq - install with `brew install jq`
- Java 17 - can be installed manually or through IntelliJ, which will do it for you when importing the project
- Postgres 13 - multiple options work here; as long as you have a Postgres instance running on localhost:5432, the local app will connect appropriately. Be sure to use Postgres 13 (as of Feb 2025, Postgres 17 did not work).
  - Download Postgres.app (recommended) from https://postgresapp.com/
  - Brew: https://formulae.brew.sh/formula/postgresql@13

#### External Services

Terra services:

- Sam - used to authenticate (authn) users connecting to the service and authorize (authz) users for admin endpoints
- Rawls - used to handle workspace interactions:
  - creating methods
  - data tables
  - workflow submission
- Cromwell - used through Rawls to run submissions
- Thurloe - used to send notification emails to users

### Tech stack

- Java 17 Temurin
- Postgres 13.1
- Gradle - build automation tool
- SonarQube - static code security and coverage
- Trivy - security scanner for Docker images
- Jib - Docker image builder for Java

### Local development

To run locally:

1. Make sure you have the requirements installed from above. We recommend IntelliJ as an IDE.
2. Clone the repo (if you see broken imports, build the project to get the generated sources).
3. Spin up a local Postgres instance (NOTE: use version 13.1).
4. Run the commands in `scripts/postgres-init.sql` in your local Postgres instance. You will need to be authenticated to access GSM.
5. Run `scripts/write-config.sh`.
6. Run `./gradlew bootRun` to spin up the server.
7. Navigate to [http://localhost:8080/#](http://localhost:8080/#).
8. If this is your first time deploying to any environment, be sure to use the admin endpoint `/api/admin/v1/pipelines/{pipelineName}/{pipelineVersion}` to set your pipeline's workspace id (see the request sketch just after this list).
   1. To run this endpoint, you need to be authenticated using your firecloud test account. A list of accounts that developers typically need is [here](https://broadworkbench.atlassian.net/wiki/spaces/TSPS/pages/3699605509/Accounts+for+developers), and a list of generally useful resources is stored [here](https://broadworkbench.atlassian.net/wiki/spaces/TSPS/pages/2887778308/Teaspoons+Resources).
   2. This endpoint requires two parameters directly (in the URL path) and three in the message body:
      1. pipelineName can be retrieved by querying the `/api/pipelines/v1` endpoint.
      2. pipelineVersion can also be retrieved from the `/api/pipelines/v1` endpoint.
      3. workspaceBillingProject is listed in the Teaspoons Resources document linked above.
      4. workspaceName is also listed in the Teaspoons Resources document, and can be found through the Terra UI workspace dashboard.
      5. wdlMethodVersion is found for the specific workflow as listed in the Terra UI page for workflows.
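For illustration, the calls might look like the sketch below. The path and body field names come from the list above; the HTTP method, the way the bearer token is obtained, and the exact JSON shape are assumptions here, so check the OpenAPI spec in [openapi.yml](common/openapi.yml) for the authoritative request schema.

```bash
# Hypothetical sketch; the PATCH verb, token retrieval, and JSON body shape are
# assumptions. Consult common/openapi.yml for the real method and request schema.

# Assumes your firecloud test account is the active gcloud account.
TOKEN=$(gcloud auth print-access-token)

# Look up available pipeline names and versions.
curl -s -H "Authorization: Bearer ${TOKEN}" "http://localhost:8080/api/pipelines/v1" | jq

# Set the workspace info for a pipeline via the admin endpoint.
curl -X PATCH "http://localhost:8080/api/admin/v1/pipelines/<pipelineName>/<pipelineVersion>" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "workspaceBillingProject": "<billing project from the Teaspoons Resources doc>",
        "workspaceName": "<workspace name from the Teaspoons Resources doc>",
        "wdlMethodVersion": "<wdl method version from the Terra UI workflows page>"
      }'
```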
Note on Python tooling: the CLI uses Poetry 1.8.5 for dependency management rather than a plain venv; this is covered in the CLI repository (see "Testing the CLI locally" below).

#### Local development with debugging

If using IntelliJ (the only IDE we use on the team), you can run the server with a debugger. Follow the steps above, but instead of running `./gradlew bootRun` to spin up the server, run (debug) the App.java class through IntelliJ and set breakpoints in the code. Be sure to set `GOOGLE_APPLICATION_CREDENTIALS=config/teaspoons-sa.json` in the Run/Debug configuration's Environment Variables.

### Testing the CLI locally

If you make changes to [openapi.yml](common/openapi.yml), you should test the CLI locally. To create the autogenerated Python client files locally, run

```bash
./gradlew openApiGenerate
```

The files will be generated in `python-client/generated` and are not checked into the repo. To test with the CLI, follow the instructions in the CLI repo: [DataBiosphere/terra-scientific-pipelines-service-cli](https://github.com/DataBiosphere/terra-scientific-pipelines-service-cli/blob/main/CONTRIBUTING.md).

### Running Tests/Linter Locally

- Testing
  - Run `./gradlew service:test` to run tests
- Linting
  - Run `./gradlew spotlessCheck` to run linter checks
  - Run `./gradlew :service:spotlessApply` to fix any issues the linter finds

### (Optional) Install pre-commit hooks

A pre-commit hook is provided at [scripts/git-hooks/pre-commit](scripts/git-hooks/pre-commit) to help ensure all submitted changes are formatted correctly. To install all hooks in [scripts/git-hooks](scripts/git-hooks), run:

```bash
git config core.hooksPath scripts/git-hooks
```

### Running SonarQube locally

[SonarQube](https://www.sonarqube.org) is a static analysis tool that scans code for a wide range of issues, including maintainability problems and possible bugs. Get more information from the [DSP SonarQube Docs](https://dsp-security.broadinstitute.org/appsec-team-internal/appsec-team-internal/security-activities/sast-1#).

If you get a build failure due to SonarQube and want to debug the problem locally, you need to get the sonar token from GSM before running the gradle task:

```shell
export SONAR_TOKEN=$(gcloud secrets versions access latest --project="broad-dsde-dev" --secret="teaspoons-sonarcloud" | jq '.sonar_token')
./gradlew sonarqube
```

Running this task produces no output unless your project has errors. To generate a report, run using `--info`:

```shell
./gradlew sonarqube --info
```

### Connecting to the database

To connect to the Teaspoons database, we have a script in [dsp-scripts](https://github.com/broadinstitute/dsp-scripts) that does all the setup for you. Clone that repo and make sure you're either on Broad Internal wifi or connected to the VPN.
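If you only need to inspect the local Postgres instance from the local development setup above, you can also connect directly with `psql`. A minimal sketch with placeholder names, since the actual database and role are defined in `scripts/postgres-init.sql`:

```bash
# Connect to the locally running Postgres instance (localhost:5432) used by the app.
# <teaspoons_user> and <teaspoons_db> are placeholders; the real names are created
# by scripts/postgres-init.sql.
psql -h localhost -p 5432 -U <teaspoons_user> -d <teaspoons_db>
```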
To connect to the dev database, run the following command:

```shell
./db/psql-connect.sh dev teaspoons
```

### Deploying to dev

Upon merging to main, the dev environment will be automatically deployed via the GitHub Action [Bump, Tag, Publish, and Deploy](https://github.com/DataBiosphere/terra-scientific-pipelines-service/actions/workflows/tag-publish.yml) (the workflow is defined [here](https://github.com/DataBiosphere/terra-scientific-pipelines-service/blob/main/.github/workflows/tag-publish.yml)).

The two tasks `report-to-sherlock` and `set-version-in-dev` will prompt Sherlock to deploy the new version to dev. You can check the status of the deployment in [Beehive](https://beehive.dsp-devops.broadinstitute.org/apps/teaspoons) and in [ArgoCD](https://ap-argocd.dsp-devops.broadinstitute.org/applications/ap-argocd/teaspoons-dev).

For more information about deployment to dev, check out DevOps' [excellent documentation](https://docs.google.com/document/d/1lkUkN2KOpHKWufaqw_RIE7EN3vN4G2xMnYBU83gi8VA/).

### Tracing

We use [OpenTelemetry](https://opentelemetry.io/) for tracing, so that every request has a tracing span that can be viewed in [Google Cloud Trace](https://cloud.google.com/trace). See [this DSP blog post](https://broadworkbench.atlassian.net/wiki/x/AoGlrg) for more info.

### Running the end-to-end tests

The end-to-end test is specified in `.github/workflows/run-e2e-tests.yaml`. It calls [the test script defined in the dsp-reusable-workflows repo](https://github.com/broadinstitute/dsp-reusable-workflows/blob/main/e2e-test/teaspoons_gcp_e2e_test.py). The end-to-end test is automatically run nightly on the dev environment.

To run the test against a specific feature branch:

1. Grab the image tag for your feature branch.
   > If you've opened a PR, you can find the image tag as follows:
   > - Go to the Bump, Tag, Publish, and Deploy workflow that's triggered each time you push to your branch.
   > - From there, go to the tag-publish-docker-deploy task.
   > - Expand the "Construct docker image name and tag" step.
   > - The first line should contain the image tag, something like "0.0.81-6761487".
2. Navigate to the [e2e-test GHA workflow](https://github.com/DataBiosphere/terra-scientific-pipelines-service/actions/workflows/run-e2e-tests.yaml).
3. Click on the "Run workflow" button and select your branch from the dropdown.
   - Enter the image tag from step 1 in the "Custom image tag" field.
   - If you've updated the end-to-end test in the dsp-reusable-workflows repo, enter either a commit hash or your git branch name. If you don't need to change the test, leave the default as main.
4. Click the green "Run workflow" button.

## Python clients

We publish a "thin", auto-generated Python client that wraps the Teaspoons APIs. This client is published to [PyPI](https://pypi.org/project/terra-scientific-pipelines-service-api-client/) and can be installed with `pip install teaspoons_client`, although it is not meant to be user-facing. The thin API client is generated from the OpenAPI spec in the `openapi` directory. Publishing occurs automatically when a new version of the service is deployed, via the [release-python-client GHA](https://github.com/DataBiosphere/terra-scientific-pipelines-service/blob/main/.github/workflows/release-python-client.yml).

We also have a user-facing, "thick" CLI whose code lives in a separate repository: [DataBiosphere/terra-scientific-pipelines-service-cli](https://github.com/DataBiosphere/terra-scientific-pipelines-service-cli).
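As a quick smoke test of the thin client, you can install it into a scratch virtual environment and confirm it imports. The install command comes from the text above; treating `teaspoons_client` as the import name is an assumption, so verify against the generated code in `python-client/generated` if it differs.

```bash
# Install the published thin client into a throwaway virtual environment and
# confirm the package imports. The package/import name follows the text above;
# check python-client/generated if your local generation uses a different name.
python -m venv /tmp/teaspoons-client-smoke
source /tmp/teaspoons-client-smoke/bin/activate
pip install teaspoons_client
python -c "import teaspoons_client; print(teaspoons_client.__name__)"
```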
