Legacy Client

Deprecated since version 0.3.7: The legacy client is being phased out. Use flowbio.v2.Client instead.

Example Usage

import flowbio

client = flowbio.Client()
client.login("your_username", "your_password")

# Upload standard data
data = client.upload_data("/path/to/file.fa", progress=True, retries=5)
print(data)

# Upload sample
sample = client.upload_sample(
    "My Sample Name",
    "/path/to/reads1.fastq.gz",
    "/path/to/reads2.fastq.gz",  # optional
    progress=True,
    metadata={
        "sample_type": "RNA-Seq",
        "scientist": "Charles Darwin",
        "strandedness": "reverse",
    }
)
print(sample)

# Upload multiplexed
multiplexed = client.upload_multiplexed(
    "/path/to/reads.fastq.gz",
    progress=True,
    retries=5,
)
print(multiplexed)

# Upload annotation
annotation = client.upload_annotation(
    "/path/to/annotation.csv",
    progress=True,
    retries=5,
)
print(annotation)

# Run pipeline
execution = client.run_pipeline(
    "RNA-Seq",
    "3.8.1",
    "23.04.3",
    params={"param1": "param2"},
    data_params={"fasta": 123456789},
)
print(execution)

class flowbio.Client(url='https://api.flow.bio/graphql')

NB: This class is being deprecated in favour of flowbio.v2.Client. If you are able to use the v2 client, we highly recommend you do so.

This is the legacy client used to interface with the Flow API. You can instantiate it like so:

client = flowbio.Client()

Alternatively, if you are working with a private instance of Flow, you can instantiate it with your own url pointing to the Flow API:

client = flowbio.Client("https://mycompany.flow.bio/api/graphql")
execute(*args, check_token=True, **kwargs)

Sends a request to the GraphQL server.

Parameters:
  • message (str) – The query to make.

  • method (str) – By default, POST requests are sent, but this can be overridden here.

  • variables (dict) – Any GraphQL variables can be passed here.

  • retries (int) – The number of times to retry on failure.

  • retry_statuses (list) – The HTTP statuses to retry on.

Return type:

dict
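
Because execute() accepts an arbitrary GraphQL document, it can be used for queries the client does not wrap. A minimal sketch is below; the query name and its fields are assumptions for illustration, not the actual Flow schema, and whether execute() returns the full response envelope or only the data payload should be checked against the real API.

```python
def fetch_user_id(client, username):
    """Look up a user's ID with a raw GraphQL query.

    The `user` query and its `id`/`username` fields are assumptions;
    consult the Flow GraphQL schema for the real field names.
    """
    query = """
    query GetUser($username: String!) {
        user(username: $username) { id username }
    }
    """
    # Retry a few times on transient failures
    response = client.execute(query, variables={"username": username}, retries=3)
    # Assumes execute() returns the standard GraphQL envelope {"data": ...}
    return response["data"]["user"]["id"]
```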

login(username, password)

Logs in the client and allows it to access resources that require a logged-in user.

Parameters:
  • username (str) – The username of the user.

  • password (str) – The password of the user.

Return type:

None

refresh_token()

Refreshes the access token.

user(username)

Returns a user object.

Parameters:

username (str) – The username of the user.

Return type:

dict

data(id)

Returns a data object.

Parameters:

id (str) – The ID of the data.

Return type:

dict

execution(id)

Returns an execution.

Parameters:

id (str) – The ID of the execution.

Return type:

dict
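
Since run_pipeline() returns before the pipeline finishes, execution() can be polled to watch a run. A sketch follows; the "status" key and its "COMPLETED"/"ERROR" values are assumptions for illustration, so check the fields of a real execution payload before relying on them.

```python
import time

def wait_for_execution(client, execution_id, poll_seconds=30, timeout=3600):
    """Poll an execution until it leaves a running state.

    The "status" field and the "COMPLETED"/"ERROR" values are
    assumptions; inspect an actual execution dict for the real names.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        execution = client.execution(execution_id)
        if execution.get("status") in ("COMPLETED", "ERROR"):
            return execution
        time.sleep(poll_seconds)
    raise TimeoutError(f"Execution {execution_id} still running after {timeout}s")
```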

run_pipeline(name, version, nextflow_version, params=None, data_params=None, sample_params=None, genome=None)

Runs a pipeline.

Parameters:
  • name (str) – The name of the pipeline.

  • version (str) – The version of the pipeline.

  • nextflow_version (str) – The version of Nextflow to use.

  • params (dict) – The parameters to pass to the pipeline.

  • data_params (dict) – The data parameters to pass to the pipeline.

  • sample_params (dict) – The sample parameters to pass to the pipeline.

  • genome (str) – The genome to use.

Return type:

dict
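
The data_params argument links previously uploaded files into a run, as in the data_params={"fasta": 123456789} example above. A chained upload-then-run sketch is below; it assumes the dict returned by upload_data() carries an "id" key, and the pipeline name and version strings are illustrative.

```python
def upload_and_run(client, fasta_path):
    """Upload a reference file, then start a pipeline run that uses it.

    Assumes upload_data() returns a dict with an "id" key; the
    pipeline name and version strings are illustrative placeholders.
    """
    data = client.upload_data(fasta_path, progress=True, retries=5)
    return client.run_pipeline(
        "RNA-Seq",
        "3.8.1",
        "23.04.3",
        data_params={"fasta": data["id"]},
    )
```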

sample(id)

Returns a sample.

Parameters:

id (str) – The ID of the sample.

Return type:

dict

upload_annotation(path, ignore_warnings=False, chunk_size=1000000, progress=False, use_base64=False, retries=0)

Uploads an annotation sheet to the server.

Parameters:
  • path (str) – The path to the annotation sheet.

  • ignore_warnings (bool) – Whether to ignore warnings.

  • chunk_size (int) – The size of each chunk to upload.

  • progress (bool) – Whether to show a progress bar.

  • retries (int) – The number of times to retry the upload.

Return type:

dict

upload_data(path, chunk_size=1000000, progress=False, use_base64=False, retries=0)

Uploads a file to the server.

Parameters:
  • path (str) – The path to the file.

  • chunk_size (int) – The size of each chunk to upload.

  • progress (bool) – Whether to show a progress bar.

  • retries (int) – The number of times to retry the upload.

Return type:

dict

upload_multiplexed(path, chunk_size=1000000, progress=False, use_base64=False, retries=0)

Uploads a multiplexed reads file to the server.

Parameters:
  • path (str) – The path to the multiplexed reads file.

  • chunk_size (int) – The size of each chunk to upload.

  • progress (bool) – Whether to show a progress bar.

  • retries (int) – The number of times to retry the upload.

Return type:

dict

upload_sample(name, path1, path2=None, chunk_size=1000000, progress=False, metadata=None, use_base64=False)

Uploads a sample to the server.

Parameters:
  • name (str) – The name of the sample.

  • path1 (str) – The path to the first file.

  • path2 (str | None) – The path to the second file if sample is paired-end.

  • chunk_size (int) – The size of each chunk to upload.

  • progress (bool) – Whether to show a progress bar.

  • metadata (dict | None) – The metadata to attach to the sample. This must include a sample_type key. May also include project and organism keys, which are extracted and passed as dedicated parameters to the v2 client.

Return type:

dict