This function is deprecated. Please use api-job and api-perform instead.
Usage
insert_query_job(
  query,
  project,
  destination_table = NULL,
  default_dataset = NULL,
  create_disposition = "CREATE_IF_NEEDED",
  write_disposition = "WRITE_EMPTY",
  use_legacy_sql = TRUE,
  ...
)
Arguments
- query
  SQL query string.
- destination_table
  (optional) destination table for large queries, either as a string in the format used by BigQuery, or as a list with project_id, dataset_id, and table_id entries.
- default_dataset
  (optional) default dataset for any table references in query, either as a string in the format used by BigQuery, or as a list with project_id and dataset_id entries.
- create_disposition
  behavior for table creation. Defaults to "CREATE_IF_NEEDED"; the only other supported value is "CREATE_NEVER". See the API documentation for more information.
- write_disposition
  behavior for writing data. Defaults to "WRITE_EMPTY"; other possible values are "WRITE_TRUNCATE" and "WRITE_APPEND". See the API documentation for more information.
- use_legacy_sql
  (optional) set to FALSE to enable BigQuery's standard SQL.
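A minimal sketch of a call, assuming a hypothetical billing project "my-project" and dataset "my_dataset" (valid BigQuery credentials are required; the table and project names here are placeholders, not part of the original documentation):

```r
# Submit an asynchronous query job using standard SQL, writing the
# results to a (hypothetical) destination table in my_dataset.
job <- insert_query_job(
  query = "SELECT word, word_count
           FROM `bigquery-public-data.samples.shakespeare`
           LIMIT 10",
  project = "my-project",               # hypothetical billing project
  destination_table = list(
    project_id = "my-project",
    dataset_id = "my_dataset",
    table_id   = "shakespeare_sample"
  ),
  write_disposition = "WRITE_TRUNCATE", # overwrite any existing table
  use_legacy_sql = FALSE                # opt in to standard SQL
)

# insert_query_job() returns immediately; poll the job resource
# until the query completes.
job <- wait_for(job)
```

Because the job is asynchronous, the returned list describes a pending job; pairing it with wait_for() (listed under "Other jobs" below) blocks until BigQuery finishes the query.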
Value
a job resource list, as documented at https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs
See also
API documentation for insert method: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert
Other jobs:
get_job(), insert_extract_job(), insert_upload_job(), wait_for()