Searched updates and deletes can be parallelized as they execute. Inserts in a loop, however, are not as obviously parallelizable. Therefore INSERT has an option for buffering multiple inserts into a queue for partitioned parallel execution.

The format of the statement is:

INSERT INTO table [KEY <key>] [OPTION (INTO <daq>)] <column list> <values or select>

The DAQ is a distributed async queue object; see the section on calling procedures in a cluster. Specifying the INTO <daq> option partitions the insert: the system determines where each key should go and buffers the operation into the DAQ. When the result of the DAQ is fetched, all the buffered operations are sent in parallel, as a single message per node concerned. This is easily tens or hundreds of times more efficient than inserting row by row. The transactional behavior is controlled by the DAQ; again, see the section on calling procedures in a cluster. Inserts can be mixed with other types of operations in the same DAQ. If the DAQ is transactional, any failed insert, as in the case of a uniqueness violation, makes the transaction uncommittable, preserving integrity.

Example

create procedure ct_fill_daq (in n1 int, in n2 int, in st int := 1)
{
  declare daq any;
  declare ct int;
  -- create the DAQ object
  daq := daq (1);
  for (ct := n1; ct < n2; ct := ct + st)
    {
      -- buffer the insert into the DAQ instead of executing it immediately
      insert into CT option (into daq) (ROW_NO, DT)
        values (ct, cast (ct as varchar) || ' xxxxxxxxxxxxxxx');
    }
  -- fetch the DAQ results; this flushes the buffered inserts,
  -- sending one batched message per node concerned
  while (daq_next (daq))
    ;
}
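
The procedure above is invoked like any other stored procedure. A minimal usage sketch follows; the row counts are illustrative, not from the original:

-- Assumes the CT table and the ct_fill_daq procedure defined above exist.
-- Fills CT with rows 0 .. 99999, using the default step of 1.
ct_fill_daq (0, 100000);

Inserting the same rows one by one, without the INTO <daq> option, would send one message per row instead of one batched message per node, which is where the efficiency gain comes from.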