This function is deprecated; please use the api-job functions instead.
Usage
insert_upload_job(
  project,
  dataset,
  table,
  values,
  billing = project,
  create_disposition = "CREATE_IF_NEEDED",
  write_disposition = "WRITE_APPEND",
  ...
)
Arguments
- project, dataset
Project and dataset identifiers
- table
name of table to insert values into
- values
data frame of data to upload
- billing
project ID to use for billing
- create_disposition
behavior for table creation if the destination already exists. Defaults to "CREATE_IF_NEEDED"; the only other supported value is "CREATE_NEVER". See the API documentation for more information.
- write_disposition
behavior for writing data if the destination already exists. Defaults to "WRITE_APPEND"; other possible values are "WRITE_TRUNCATE" and "WRITE_EMPTY". See the API documentation for more information.
- ...
Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase.
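For example, to replace the contents of a table that must already exist, you could override both dispositions. A minimal sketch, assuming the project, dataset, and table identifiers below are placeholders for resources you already have:

job <- insert_upload_job(
  "my-project",   # placeholder project ID
  "my_dataset",   # placeholder dataset
  "mtcars",
  mtcars,
  create_disposition = "CREATE_NEVER",   # error if the table does not already exist
  write_disposition = "WRITE_TRUNCATE"   # replace any existing rows
)
wait_for(job)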
See also
Google API documentation: https://cloud.google.com/bigquery/docs/loading-data
Other jobs:
get_job(), insert_extract_job(), insert_query_job(), wait_for()
Examples
if (FALSE) {
# List the datasets and tables visible in the test project
list_datasets(bq_test_project)
list_tables("193487687779", "houston")

# Upload the mtcars data frame and wait for the load job to complete
job <- insert_upload_job("193487687779", "houston", "mtcars", mtcars)
wait_for(job)

# Confirm the new table appears, then clean up
list_tables("193487687779", "houston")
delete_table("193487687779", "houston", "mtcars")
}
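As noted at the top of this page, this function is deprecated in favor of the api-job interface. A rough equivalent of the example above using the newer bq_* functions might look like this (a sketch, assuming bq_table(), bq_perform_upload(), bq_job_wait(), and bq_table_delete() from a current bigrquery release):

if (FALSE) {
# Build a table reference, upload mtcars, wait for the load job, then clean up
tb <- bq_table("193487687779", "houston", "mtcars")
job <- bq_perform_upload(tb, mtcars, write_disposition = "WRITE_APPEND")
bq_job_wait(job)
bq_table_delete(tb)
}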