You must pass the parameter query_id. This endpoint returns the execution ID and results (in CSV) of the latest run of the query, regardless of the job ID or whether the run was triggered in the app or via the API. The specified query must either be public or one you have ownership of (you, or a team you belong to, own it).
This endpoint does NOT trigger an execution but does consume credits through datapoints.
  • Dune internally enforces a maximum query result size (currently 8GB, subject to increase in the future). If your query yields more than 8GB of data, the result is truncated in storage. In that case, pulling the result data (using pagination) without setting allow_partial_results to true triggers the error message: “error”: “Partial Result, please request with ‘allows_partial_results=true’”. If you do want the truncated result, pass allow_partial_results=true, but make sure you really intend to fetch partial data (see the request sketch after this list).
  • We recommend reading about Pagination to get the most out of the API and handle large results.
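Outside the SDK, you can call the endpoint directly over HTTP. Below is a minimal sketch using Python's requests library; the query_id value, the DUNE_API_KEY environment variable name, and the limit/offset page size are illustrative assumptions rather than values from this page.

import os
import requests

query_id = 1215383  # placeholder: use a public query or one you own
url = f"https://api.dune.com/api/v1/query/{query_id}/results/csv"

response = requests.get(
    url,
    headers={"X-Dune-API-Key": os.environ["DUNE_API_KEY"]},  # API key read from the environment
    params={
        "limit": 1000,                    # pagination: rows per page
        "offset": 0,                      # pagination: starting row
        "allow_partial_results": "true",  # opt in to a truncated (>8GB) result
    },
)
response.raise_for_status()
print(response.text)  # CSV of the latest execution's results
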
If you are using the Python SDK, you can execute a query and fetch its result in one function call, like below:
Python SDK

import dotenv, os
from dune_client.types import QueryParameter
from dune_client.client import DuneClient
from dune_client.query import QueryBase

# change the current working directory to where the .env file lives
os.chdir("/Users/abc/local-Workspace/python-notebook-examples")
# load the .env file
dotenv.load_dotenv(".env")
# set up the Dune Python client
dune = DuneClient.from_env()

# define the query to run (the query_id and parameter below are placeholders;
# replace them with a query you own or a public query)
query = QueryBase(
    query_id=1215383,
    params=[
        QueryParameter.text_type(name="TextField", value="Plain Text"),
    ],
)

Use run_query to get the result in JSON, run_query_csv to get the result in CSV format, and run_query_dataframe to get the result as a Pandas DataFrame.

result = dune.run_query(
    query=query,            # pass in the query to run
    performance="large",    # optionally choose the execution tier (default is "medium")
)
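
run_query_csv and run_query_dataframe follow the same calling pattern. A minimal sketch, assuming the same query object defined above:

result_csv = dune.run_query_csv(query=query)       # result in CSV format
result_df = dune.run_query_dataframe(query=query)  # result as a Pandas DataFrame (requires pandas)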


curl --request GET \
  --url https://api.dune.com/api/v1/query/{query_id}/results/csv \
  --header 'X-Dune-API-Key: <your_api_key>'