Object cache driver using the "DBI" package interface for storage.
This means that storr can work with any supported "DBI" driver
(though in practice this works only for SQLite and Postgres until
some MySQL dialect translation is done). To connect, you must
provide the driver object (e.g., RSQLite::SQLite() or
RPostgres::Postgres()) as the con argument.
```r
storr_dbi(tbl_data, tbl_keys, con, args = NULL, binary = NULL,
  hash_algorithm = NULL, default_namespace = "objects")

driver_dbi(tbl_data, tbl_keys, con, args = NULL, binary = NULL,
  hash_algorithm = NULL)
```
| Argument | Description |
|---|---|
| tbl_data | Name for the table that maps hashes to values |
| tbl_keys | Name for the table that maps keys to hashes |
| con | Either a DBI connection or a DBI driver (see example) |
| args | Arguments to pass, along with the driver, to DBI::dbConnect if con is a driver |
| binary | Optional logical indicating if the values should be stored in binary. If possible, this is both (potentially) faster and more accurate. However, at present it is supported only under very recent DBI and RSQLite packages, for no other DBI drivers, and is not actually any faster. If not given (i.e., NULL), then binary storage will be used where possible |
| hash_algorithm | Name of the hash algorithm to use. Possible values are "md5", "sha1", and others supported by digest |
| default_namespace | Default namespace (see storr) |
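For instance, a storr using SHA-1 hashes rather than the package default can be requested through hash_algorithm. A minimal sketch, assuming RSQLite is available; ":memory:" is passed through args to the connection:

```r
# Minimal sketch: an in-memory SQLite storr that hashes with sha1.
st <- storr::storr_dbi("tblData", "tblKeys", RSQLite::SQLite(), ":memory:",
                       hash_algorithm = "sha1")
st$set("x", 1:10)
st$destroy()
```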
Because the DBI package specifies a uniform interface for using DBI-compliant databases, you need only provide a connection object; storr does not do anything to help create the connection object itself.
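For example, a Postgres-backed storr would first open the connection with DBI and RPostgres and then hand it over. The connection details below are placeholders for illustration only, not anything storr provides:

```r
# Hypothetical connection settings; substitute your own database details.
con <- DBI::dbConnect(RPostgres::Postgres(),
                      dbname = "mydb", host = "localhost",
                      user = "me", password = "secret")
st <- storr::storr_dbi("storr_data", "storr_keys", con)
```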
The DBI storr driver works by using two tables: one mapping keys to hashes, and one mapping hashes to values. Two table names need to be provided here; they must be different and should be treated as opaque (don't use them for anything else - reading or writing). Apart from that, the names do not matter.
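As a consequence, several independent storrs can share one database as long as each uses its own pair of table names. A small sketch, assuming RSQLite:

```r
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
# Each storr creates and manages its own pair of tables:
st_a <- storr::storr_dbi("a_data", "a_keys", con)
st_b <- storr::storr_dbi("b_data", "b_keys", con)
DBI::dbListTables(con)  # "a_data" "a_keys" "b_data" "b_keys" (order may vary)
```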
Because of the way the underlying DBI drivers treat binary data, binary serialisation is not any faster (and might be slightly slower) than string serialisation, in contrast with my experience with other backends.
storr uses DBI's "prepared query" approach to safely interpolate keys, namespaces and values into the database - this should allow odd characters without throwing SQL syntax errors. Table names can't be interpolated in the same way - these storr simply quotes, but validates beforehand to ensure that tbl_data and tbl_keys do not contain quotes.
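So keys that would be awkward to splice into raw SQL are handled safely. A small illustration, assuming RSQLite:

```r
st <- storr::storr_dbi("tblData", "tblKeys", RSQLite::SQLite(), ":memory:")
# Quotes, semicolons and comment markers in keys are fine because they are
# passed as query parameters, never pasted into the SQL text:
key <- "it's; got -- awkward \"characters\""
st$set(key, mtcars)
identical(st$get(key), mtcars)  # TRUE
st$destroy()
```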
Be aware that $destroy() will close the connection to the database.
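This matters if you passed in your own connection, because that object is no longer usable afterwards. A small sketch:

```r
con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
st <- storr::storr_dbi("tblData", "tblKeys", con)
st$destroy()
DBI::dbIsValid(con)  # FALSE: destroy() also closed the connection
```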
```r
if (requireNamespace("RSQLite", quietly = TRUE)) {
  st <- storr::storr_dbi("tblData", "tblKeys", RSQLite::SQLite(), ":memory:")

  # Set some data:
  st$set("foo", runif(10))
  st$list()

  # And retrieve the data:
  st$get("foo")

  # These are the data tables; treat these as read only
  DBI::dbListTables(st$driver$con)

  # With recent RSQLite you'll get binary storage here:
  st$driver$binary

  # The entire storr part of the database can be removed using
  # "destroy"; this will also close the connection to the database
  st$destroy()

  # If you have a connection you want to reuse (which will be the
  # case if you are using an in-memory SQLite database for
  # multiple things within an application) it may be useful to
  # pass the connection object instead of the driver:
  con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
  st <- storr::storr_dbi("tblData", "tblKeys", con)
  st$set("foo", runif(10))

  # You can then connect a different storr to the same underlying
  # storage
  st2 <- storr::storr_dbi("tblData", "tblKeys", con)
  st2$get("foo")
}
#> [1] 0.87460066 0.17494063 0.03424133 0.32038573 0.40232824 0.19566983
#> [7] 0.40353812 0.06366146 0.38870131 0.97554784
```