Inline BLOB / BINARY data types in SQL / JDBC
There doesn't appear to be a JDBC escape syntax for this, so I searched around a bit and successfully tested the following:
SQL Server, Sybase ASE, Sybase SQL Anywhere
INSERT INTO lob_table VALUES (0x01FF);
DB2
-- Use a blob constructor. This is not needed for VARCHAR FOR BIT DATA types
INSERT INTO lob_table VALUES (blob(X'01FF'));
Derby, H2, HSQLDB, Ingres, MySQL, SQLite
INSERT INTO lob_table VALUES (X'01FF');
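Since SQLite is in that group, the literal syntax can be checked quickly from Python's built-in sqlite3 module (the table name mirrors the examples above):

```python
import sqlite3

# In-memory database; SQLite accepts the standard X'...' binary literal.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lob_table (data BLOB)")
conn.execute("INSERT INTO lob_table VALUES (X'01FF')")

# The literal decodes to the two bytes 0x01 and 0xFF.
(value,) = conn.execute("SELECT data FROM lob_table").fetchone()
print(value == b"\x01\xff")  # True
```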
Oracle
-- As mentioned by a_horse_with_no_name, keep in mind the relatively low
-- limitation of Oracle's VARCHAR types to hold only 4000 bytes!
INSERT INTO lob_table VALUES (hextoraw('01FF'));
Postgres
-- There is also hex encoding as of Postgres 9.0
-- The explicit cast is important, though
INSERT INTO lob_table VALUES (E'\\001\\377'::bytea);
See A.H.'s answer for more details about Postgres' hex encoding.
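As a sanity check of the octal escapes (independent of Postgres itself), \\001 and \\377 decode to the same two bytes as the X'01FF' examples above; shown here in Python:

```python
# E'\\001\\377' is octal escape syntax: \001 -> 0x01, \377 -> 0xFF.
decoded = bytes(int(octal, 8) for octal in ("001", "377"))
print(decoded)  # b'\x01\xff'

# Postgres 9.0+ hex encoding carries the very same bytes.
assert decoded == bytes.fromhex("01ff")
```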
SQL Standard
-- SQL actually defines binary literals as such
-- (as implemented by DB2, Derby, H2, HSQLDB, Ingres, MySQL, SQLite):
<binary string literal> ::=
X <quote> [ <space>... ]
[ { <hexit> [ <space>... ] <hexit> [ <space>... ] }... ] <quote>
<hexit> ::=
<digit> | A | B | C | D | E | F | a | b | c | d | e | f
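A small checker for that grammar, sketched in Python (the function name is made up for illustration):

```python
import re

def is_binary_literal(s: str) -> bool:
    """Check a string against the SQL <binary string literal> grammar:
    X, a quote, hexits optionally separated by spaces, a closing quote.
    The pairing in the grammar implies an even number of hexits."""
    m = re.fullmatch(r"X'([0-9A-Fa-f ]*)'", s)
    if m is None:
        return False
    hexits = m.group(1).replace(" ", "")
    return len(hexits) % 2 == 0

print(is_binary_literal("X'01FF'"))   # True
print(is_binary_literal("X'01 FF'"))  # True, spaces between hexits are allowed
print(is_binary_literal("X'01F'"))    # False, odd number of hexits
```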
An attempt was made to get a data value of type 'java.sql.Blob' from a data value of type 'CHAR () FOR BIT DATA'
Using java.sql.Blob for JDBC type BINARY is wrong from a strict JDBC API interpretation, no matter which driver you use. Using getBytes is always the appropriate method for this type (getBinaryStream should also work). See also the JDBC 4.3 specification, appendix B, specifically table B-3, which defines the default Java type for JDBC type BINARY as byte[]; B-5, which only defines a mapping from java.sql.Blob to JDBC type BLOB; and B-6, which only defines support for getBlob for columns of JDBC type BLOB.
To be clear, JDBC type BINARY usually maps to/from the SQL type BINARY (Derby's equivalent is called CHAR FOR BIT DATA), which is a fixed-length byte string/array type. A SQL (and JDBC) BLOB, on the other hand, holds binary data of varying, usually extremely large or even unlimited, length, accessed through a so-called locator object. BLOB data is generally stored off-row (the row only contains a pointer or id to the actual data), whereas BINARY (and VARBINARY) values are usually stored inline. A java.sql.Blob is the Java representation of that locator object, providing an indirection to access the off-row data of the blob.
However, some drivers are more lenient, either for backwards compatibility with earlier versions of the driver, or for maximum portability (e.g. to bridge implementation differences between database systems). For example, some drivers may define their (SQL) BLOB types as JDBC LONGVARBINARY instead (which supports getBytes and getBinaryStream), but may then also define support for getBlob for cases where users assume "I use a SQL BLOB, so I should use getBlob".
So, in short: if the column is identified as JDBC type BINARY, then use getBytes (or getBinaryStream; however, if you want to read a UUID, which is always 16 bytes, getBytes makes more sense). If the column is identified as JDBC type BLOB, then use getBlob (some drivers may also support getBytes and getBinaryStream for it, but this is not specified in JDBC).
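The UUID remark is easy to verify outside JDBC: a UUID is a fixed 128-bit value, so its binary form always occupies exactly 16 bytes, a natural fit for a BINARY(16) column plus a byte[]-style getter. Using Python's uuid module:

```python
import uuid

# A UUID is 128 bits = 16 bytes, regardless of its textual form,
# which is why BINARY(16) is the usual column type for it.
u = uuid.uuid4()
print(len(u.bytes))  # 16
print(len(str(u)))   # 36 characters in the hyphenated text form
```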
Store binary data structure into BLOB columns in C
When working with binary objects, the text protocol (mysql_query/mysql_real_query) is not the best option, since special characters like '\0' are not supported in statement strings. This means you have to allocate an additional buffer of 2 * (size of blob) + 1 bytes to hold the transformed binary object.
Solution 1: mysql_real_escape_string()
char *buffer = malloc(sizeof(struct Dictionary) * 2 + 1);
mysql_real_escape_string(conn, buffer, (const char *) &st_dic, sizeof(struct Dictionary));
sprintf(str, "INSERT INTO myTables (id, blob_field) VALUES (0, '%s')", buffer);
if (mysql_query(conn, str))
{
/* Error handling */
}
Solution 2: mysql_hex_string()
char *buffer = malloc(sizeof(struct Dictionary) * 2 + 1);
mysql_hex_string(buffer, (const char *) &st_dic, sizeof(struct Dictionary));
sprintf(str, "INSERT INTO myTables (id, blob_field) VALUES (0, X'%s')", buffer);
if (mysql_query(conn, str))
{
/* Error handling */
}
Alternative:
A better solution is to use prepared statements which use the binary protocol:
Code without error handling:
MYSQL_BIND bind;
MYSQL_STMT *stmt;
stmt= mysql_stmt_init(conn);
const char *query= "INSERT INTO myTables (id, blob_field) VALUES (0, ?)";
mysql_stmt_prepare(stmt, query, strlen(query));
memset(&bind, 0, sizeof(MYSQL_BIND));
bind.buffer_type= MYSQL_TYPE_BLOB;
bind.buffer= &st_dic;
bind.buffer_length= sizeof(struct Dictionary);
mysql_stmt_bind_param(stmt, &bind);
mysql_stmt_execute(stmt);
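The same idea, binding the raw bytes as a parameter instead of splicing them into the SQL text, exists in most client libraries. A sketch with Python's built-in sqlite3 (the struct contents are invented for the example):

```python
import sqlite3
import struct

# Stand-in for the C struct Dictionary: pack some fields into raw bytes,
# deliberately including an embedded NUL byte.
st_dic = struct.pack("<i8s", 42, b"example\x00")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTables (id INTEGER, blob_field BLOB)")

# The ? placeholder sends the value out-of-band (binary-safe),
# so no escaping or hex encoding is needed.
conn.execute("INSERT INTO myTables (id, blob_field) VALUES (0, ?)", (st_dic,))

(stored,) = conn.execute("SELECT blob_field FROM myTables").fetchone()
print(stored == st_dic)  # True
```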
Unable to create BLOB/Binary types with LoopBack 4
Here is the answer to my own question. Just define your model property with the following info:
@property({
required: true,
mysql: {
columnName: 'application_id',
dataType: 'VARBINARY',
dataLength: 16,
nullable: 'NO'
}
})
application_id: string;
It worked like a charm for me :)
Thank you all!
insert varbinary value from hex literal in Microsoft SQL Server
The documentation for the CONVERT function says that the default "style" for binary types is 0:
Translates ASCII characters to binary bytes, or binary bytes to ASCII characters. Each character or byte is converted 1:1.
So,
SELECT CONVERT(VARBINARY, '0x00173672') AS foo;
returns
foo
--------------------------------------------------------------
0x30783030313733363732
which are the ASCII byte values of the hex literal, not the hex bytes themselves. In order for CONVERT to interpret the hex literal, you need to use style 1, i.e.
SELECT CONVERT(VARBINARY, '0x00173672', 1) AS foo;
which returns
foo
--------------------------------------------------------------
0x00173672
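The difference between the two styles can be reproduced without SQL Server: style 0 takes the characters of the string as bytes, while style 1 decodes the hex digits. Illustrated in Python:

```python
literal = "0x00173672"

# Style 0: each character becomes its ASCII byte value.
style0 = literal.encode("ascii")
print(style0.hex().upper())  # 30783030313733363732

# Style 1: the '0x' prefix is skipped and the hex digits are decoded.
style1 = bytes.fromhex(literal[2:])
print(style1.hex().upper())  # 00173672
```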
Oracle 10: Using HEXTORAW to fill in blob data
Apparently you can exceed these limits if you use PL/SQL. It doesn't work if you call HEXTORAW within the UPDATE statement directly, either; it needs to be done in a separate statement, like this:
DECLARE
buf RAW(4000);
BEGIN
buf := HEXTORAW('C2B97041074...lots of hex...0CC00CD00');
UPDATE MyTable
SET blobData = buf
WHERE ID = 462;
END;
For the life of me I'll never understand some of Oracle's limitations. It's like everything is its own little special case.
How to configure MySQL 5.6 LONGBLOB for large binary data
The reason for this issue is a change in MySQL 5.6.20 as one could read in the change log:
As a result of the redo log BLOB write limit introduced for MySQL 5.6, the innodb_log_file_size setting should be 10 times larger than the largest BLOB data size found in the rows of your tables plus the length of other variable length fields (VARCHAR, VARBINARY, and TEXT type fields). No action is required if your innodb_log_file_size setting is already sufficiently large or your tables contain no BLOB data.
To resolve your issue you have to increase the value of the innodb_log_file_size option in your my.ini, under the [mysqld] section. Its default value is 48M. Setting it to
[mysqld]
innodb_log_file_size=256M
helped in my case.
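The sizing rule quoted above is simple arithmetic; a small helper applying it (the 10x factor comes from the quoted documentation, the 20 MB example blob is invented):

```python
def min_log_file_size(largest_blob_bytes: int, other_var_fields_bytes: int = 0) -> int:
    """Apply the MySQL 5.6.20 rule: innodb_log_file_size should be at least
    10x the largest BLOB plus other variable-length field data in a row."""
    return 10 * (largest_blob_bytes + other_var_fields_bytes)

# A 20 MB blob needs at least a 200 MB redo log file, so the 48M
# default is far too small and 256M leaves comfortable headroom.
largest_blob = 20 * 1024 * 1024
needed = min_log_file_size(largest_blob)
print(needed // (1024 * 1024))  # 200
```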
Be careful when changing the value of innodb_log_file_size; make sure you do this safely:
- Shut the server down cleanly and normally.
- Move away (don’t delete) the log files, which are named ib_logfile0, ib_logfile1, and so on.
- Check the error log to ensure there was no problem shutting down.
- Then restart the server and watch the error log output carefully. You should see InnoDB print messages saying that the log files don’t exist. It will create new ones and then start.
- At this point you can verify that InnoDB is working, and then you can delete the old log files.