|
private | checkMapField (string k, reference fh) |
| performs per-field pre-processing on the passed map in the constructor More...
|
|
nothing | commit () |
| flushes any queued data and commits the transaction
|
|
| constructor (SqlUtil::Table target, hash mapv, *hash opts) |
| builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic More...
|
|
| constructor (SqlUtil::AbstractTable target, hash mapv, *hash opts) |
| builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic More...
|
|
| destructor () |
| throws an exception if there is data pending in the block cache More...
|
|
| discard () |
| discards any buffered batch data; this method should be called if an error occurs after using the batch APIs (queueData()) More...
|
|
private | error (string fmt) |
| prepends the datasource description to the error string and calls Mapper::error()
|
|
private | error2 (string ex, string fmt) |
| prepends the datasource description to the error description and calls Mapper::error2()
|
|
*hash | flush () |
| flushes any remaining batched data to the database; this method should always be called before committing the transaction or destroying the object More...
|
|
private hash | flushIntern () |
| flushes queued data to the database
|
|
Qore::SQL::AbstractDatasource | getDatasource () |
| returns the AbstractDatasource object associated with this object
|
|
*list | getReturning () |
| returns a list argument for the SqlUtil "returning" option, if applicable
|
|
SqlUtil::AbstractTable | getTable () |
| returns the underlying SqlUtil::AbstractTable object
|
|
string | getTableName () |
| returns the table name
|
|
private | init (hash mapv, *hash opts) |
| common constructor initialization
|
|
hash | insertRow (hash rec) |
| inserts a row into the target table based on a mapped input record; does not commit the transaction More...
|
|
deprecated hash | insertRowNoCommit (hash rec) |
| a deprecated alias for insertRow(); do not use in new code
|
|
| logOutput (hash h) |
| ignores logging from Mapper since sequence values may need to be logged; output is logged manually in insertRow()
|
|
private | mapFieldType (string key, hash m, reference v, hash rec) |
| performs type handling
|
|
hash | optionKeys () |
| returns a list of valid constructor options for this class (can be overridden in subclasses) More...
|
|
*hash | queueData (hash rec, *hash crec) |
| inserts a row (or a set of rows, in case a hash of lists is passed) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction More...
|
|
*hash | queueData (AbstractIterator iter, *hash crec) |
| inserts a set of rows (list of hashes) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction More...
|
|
private *hash | queueDataIntern (hash rec) |
| inserts a row into the block buffer based on a mapped input record; does not commit the transaction More...
|
|
nothing | rollback () |
| discards any queued data and rolls back the transaction
|
|
| setRowCode (*code rowc) |
| sets a closure or call reference that will be called when data has been sent to the database and all output data is available; it must accept a single hash argument representing the data written to the database, including any output arguments. This code is reset once the transaction is committed. More...
|
|
hash | validKeys () |
| returns a list of valid field keys for this class (can be overridden in subclasses) More...
|
|
hash | validTypes () |
| returns a list of valid field types for this class (can be overridden in subclasses) More...
|
|
provides an inbound data mapper to a Table target
builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic
The target table is also scanned using SqlUtil, and its column definitions are used to update the target record specification; if any column has a NOT NULL constraint but no default value, mapping, or constant value, a MAP-ERROR exception is thrown
- Example:
const DbMapper = (
    "id": ("sequence": "seq_inventory_example"),
    "store_code": "StoreCode",
    "product_code": "ProductCode",
    "product_desc": "ProductDescription",
    "available": "Available",
    "in_transit": "InTransit",
    "status": ("constant": "01"),
    "total": int sub (any x, hash rec) {
        return rec.Available.toInt() + rec.Ordered.toInt() + rec.InTransit.toInt();
    },
    );

InboundTableMapper mapper(table, DbMapper);
- Parameters
-
target | the target table object |
mapv | a hash providing field mappings; each hash key is the name in lower case of the output column in the target table; each value is either True (meaning no translations are done; the data is copied 1:1) or a hash describing the mapping; see TableMapper Specification Format for detailed documentation for this option |
opts | an optional hash of options for the mapper; see Mapper Options for a description of valid mapper options plus the following options specific to this object:
"unstable_input" : set this option to True (default False) if the input passed to the mapper is unstable, meaning that different hash keys or a different hash key order can be passed as input data in each call to insertRow(); if this option is set, then insert speed will be reduced by about 33%; when this option is not set, an optimized insert approach is used which allows for better performance
"insert_block" : for DB drivers supporting bulk DML (for use with the queueData(), flush(), and discard() methods), the number of rows inserted at once (default: 1000; only used when "unstable_input" is False and bulk inserts are supported by the table object); see InboundTableMapper Bulk Insert API for more information
"rowcode" : a per-row closure or call reference for batch inserts; it must take a single hash argument and is called for every row after a bulk insert; the hash argument representing the inserted row will also contain any output values, if applicable
|
- Exceptions
-
MAP-ERROR | the map hash has a logical error (ex: "trunc" key given without "maxlen", invalid map key) |
- See Also
- setRowCode()
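As a sketch, the constructor options described above could be passed as follows; the connection string and table name are hypothetical, and DbMapper refers to a field-mapping hash like the one in the example above:

    # hedged sketch: connection string and table name are hypothetical
    Datasource ds("pgsql:user/pass@inventory");
    SqlUtil::Table table(ds, "inventory_example");

    hash opts = (
        "insert_block": 500,       # flush the block buffer every 500 queued rows
        "unstable_input": False,   # input keys are stable; use the optimized insert path
        );

    InboundTableMapper mapper(table, DbMapper, opts);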
builds the object based on a hash providing field mappings, data constraints, and optionally custom mapping logic
The target table is also scanned using SqlUtil, and its column definitions are used to update the target record specification; if any column has a NOT NULL constraint but no default value, mapping, or constant value, a MAP-ERROR exception is thrown
- Example:
const DbMapper = (
    "id": ("sequence": "seq_inventory_example"),
    "store_code": "StoreCode",
    "product_code": "ProductCode",
    "product_desc": "ProductDescription",
    "available": "Available",
    "in_transit": "InTransit",
    "status": ("constant": "01"),
    "total": int sub (any x, hash rec) {
        return rec.Available.toInt() + rec.Ordered.toInt() + rec.InTransit.toInt();
    },
    );

InboundTableMapper mapper(table, DbMapper);
- Parameters
-
target | the target table object |
mapv | a hash providing field mappings; each hash key is the name of the output field; each value is either True (meaning no translations are done; the data is copied 1:1) or a hash describing the mapping; see TableMapper Specification Format for detailed documentation for this option |
opts | an optional hash of options for the mapper; see Mapper Options for a description of valid mapper options plus the following options specific to this object:
"unstable_input" : set this option to True (default False) if the input passed to the mapper is unstable, meaning that different hash keys or a different hash key order can be passed as input data in each call to insertRow(); if this option is set, then insert speed will be reduced by about 33%; when this option is not set, an optimized insert approach is used which allows for better performance
"insert_block" : for DB drivers supporting bulk DML (for use with the queueData(), flush(), and discard() methods), the number of rows inserted at once (default: 1000; only used when "unstable_input" is False and bulk inserts are supported by the table object); see InboundTableMapper Bulk Insert API for more information
"rowcode" : a per-row closure or call reference for batch inserts; it must take a single hash argument and is called for every row after a bulk insert; the hash argument representing the inserted row will also contain any output values, if applicable
|
- Exceptions
-
MAP-ERROR | the map hash has a logical error (ex: "trunc" key given without "maxlen", invalid map key) |
TABLE-ERROR | the table includes a column using an unknown native data type |
- See Also
- setRowCode()
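For comparison with the batch API below, a single mapped record can be inserted directly with insertRow(); a hedged sketch, where the input keys follow the mapping example above and the values are made up for illustration:

    # hedged sketch: insert one mapped input record; does not commit the transaction
    hash row = mapper.insertRow((
        "StoreCode": "S01",
        "ProductCode": "P100",
        "ProductDescription": "example product",
        "Available": 10,
        "Ordered": 2,
        "InTransit": 1,
        ));
    mapper.commit();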
*hash TableMapper::InboundTableMapper::queueData (hash rec, *hash crec)

inserts a row (or a set of rows, in case a hash of lists is passed) into the block buffer based on a mapped input record; the block buffer is flushed to the DB if the buffer size reaches the limit defined by the "insert_block" option; does not commit the transaction
- Example:
on_success table_mapper.commit();
on_error table_mapper.rollback();
on_success table_mapper.flush();
on_error table_mapper.discard();
map table_mapper.queueData($1), data.iterator();
Data is only inserted when the block buffer size reaches the limit defined by the "insert_block" option, in which case this method returns all the data inserted. If the mapped data is only added to the cache, no value is returned.
- Parameters
-
rec | the input record or record set in case a hash of lists is passed |
crec | an optional simple hash of data to be added to each row |
- Returns
- if batch data was inserted then a hash (columns) of lists (row data) of all data inserted and potentially returned (in case of sequences) from the database server is returned
- Note
- make sure to call flush() before committing the transaction or discard() before rolling back the transaction or destroying the object when using this method
- when using multiple mappers in a block, flush() or discard() must be executed for each mapper, whereas the DB transaction is committed or rolled back only once per datasource
- this method and batched inserts in general cannot be used when the "unstable_input" option is given in the constructor
- if the "insert_block" option is set to 1, then this method simply calls insertRow()
- if an error occurs flushing data, the count is reset by calling Mapper::resetCount()
- Exceptions
-
MAPPER-BATCH-ERROR | this exception is thrown if this method is called when the "unstable_input" option was given in the constructor |
MISSING-INPUT | a field marked mandatory is missing |
STRING-TOO-LONG | a field value exceeds the maximum length and the "trunc" key is not set |
INVALID-NUMBER | the field is marked as numeric but the input value contains non-numeric data |
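The "rowcode" callback described above can also be set after construction with setRowCode(); a hedged sketch, where the column names id and product_code are assumptions based on the mapping example:

    # hedged sketch: log each row after a bulk insert, including any output
    # values (such as sequence-generated ids) returned from the database
    table_mapper.setRowCode(sub (hash row) {
        printf("inserted id %y for product %y\n", row.id, row.product_code);
    });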