MySQL Query Gone Wild

It's a known issue regarding subtraction from unsigned integers.

Subtraction between integer values, where one is of type UNSIGNED, produces an unsigned
result by default. If the result would otherwise have been negative, it becomes the
maximum integer value. If the NO_UNSIGNED_SUBTRACTION SQL mode is enabled, the result is
negative.

Reference: Numeric Types
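For illustration, here is a minimal sketch of both behaviours in a scratch session (the CAST just forces one operand to be unsigned):

-- One operand is UNSIGNED, so the would-be negative result is rejected
-- with an out-of-range error (older servers instead wrap around to a
-- huge unsigned value such as 18446744073709551615).
SELECT CAST(0 AS UNSIGNED) - 1;

-- With NO_UNSIGNED_SUBTRACTION the subtraction is done as signed math.
SET SESSION sql_mode = 'NO_UNSIGNED_SUBTRACTION';
SELECT CAST(0 AS UNSIGNED) - 1;   -- returns -1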

Match '%' sign when searching in MySQL database

The default escape character is \, so just prefix the % with a backslash: \%.

The manual clearly says:

To test for literal instances of a wild-card character, precede it by the escape character. If you do not specify the ESCAPE character, “\” is assumed.

Search for % in Stack%Overflow:

mysql> select 'Stack%Overflow' like '%\%%';
+------------------------------+
| 'Stack%Overflow' like '%\%%' |
+------------------------------+
|                            1 | <----- Found
+------------------------------+
1 row in set (0.00 sec)

Search for % in StackOverflow:

mysql> select 'StackOverflow' like '%\%%';
+-----------------------------+
| 'StackOverflow' like '%\%%' |
+-----------------------------+
|                           0 | <----- Not Found
+-----------------------------+
1 row in set (0.00 sec)
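As a side note, LIKE also lets you pick your own escape character with an explicit ESCAPE clause, which can be handy when backslashes are being consumed by a client layer:

-- '|' is declared as the escape character, so '|%' means a literal '%'.
SELECT 'Stack%Overflow' LIKE '%|%%' ESCAPE '|';   -- 1 (found)
SELECT 'StackOverflow'  LIKE '%|%%' ESCAPE '|';   -- 0 (not found)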

EDIT:

If you are calling this query from PHP, you'll have to use \\, because PHP also uses \ as its escape character. So, to make MySQL receive a \, you need to write \\ in PHP.

How can I search (case-insensitive) in a column using LIKE wildcard?

SELECT  *
FROM trees
WHERE trees.`title` COLLATE UTF8_GENERAL_CI LIKE '%elm%'

Actually, if you add COLLATE UTF8_GENERAL_CI to your column's definition, you can just omit all these tricks: it will work automatically.

ALTER TABLE trees
  MODIFY COLUMN title VARCHAR(…)
  CHARACTER SET UTF8 COLLATE UTF8_GENERAL_CI;

This will also rebuild any indexes on this column, so they can be used for queries without a leading '%'.
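Once the column itself carries the case-insensitive collation, the plain query needs no COLLATE clause at all; a quick sketch against the same (hypothetical) trees table:

-- With the column collated as UTF8_GENERAL_CI, this matches
-- 'elm', 'Elm', 'AMERICAN ELM', and so on.
SELECT *
FROM trees
WHERE title LIKE '%elm%';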

Decrement in mysql goes past zero

Add `AND value > 0` to the UPDATE's WHERE clause, as sketched below.
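A minimal sketch, with a made-up counters table; the extra predicate also sidesteps the unsigned-subtraction surprise described at the top of this page:

-- Rows already at 0 are simply not updated, so the value never goes
-- below zero (and an UNSIGNED column never underflows).
UPDATE counters
SET value = value - 1
WHERE id = 42
  AND value > 0;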

Hard to debug Mysql server has gone away

The problem is that the connection is timing out. This is not a setting that you can change at the ODBC level. My MySQL provider had set the timeout at 30 seconds (you can find out what yours is set to by running the pass-through query "SELECT @@wait_timeout"). I got round it by polling the connection every 10 seconds with a simple form, which keeps the connection alive. Make sure you have an AutoExec macro which opens the form.

Private Sub Form_Open(Cancel As Integer)
    ' TimerInterval is in milliseconds, so 10000 = 10 seconds
    Me.TimerInterval = 10000
End Sub

Private Sub Form_Timer()
    ' Touch the connection so the server's wait_timeout never expires
    Me.Requery
End Sub
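To see exactly what you are up against, a slightly wider form of that pass-through query reports both server-side timeouts (this is plain MySQL, independent of Access):

-- Idle connections are dropped once wait_timeout (or interactive_timeout,
-- for interactive clients) seconds pass with no activity.
SELECT @@wait_timeout, @@interactive_timeout;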

MySQL query is slow only when using ORDER BY field DESC and LIMIT

  • "Avg_row_length: 767640"; lots of MEDIUMTEXT. A row is limited to about 8KB; overflow goes into "off-record" blocks. Reading those blocks takes extra disk hits.

  • SELECT * will reach for all those fat columns. The total will be about 50 reads (of 16KB each). This takes time.

  • (Exp 4) SELECT a,b,c,d ran faster because it did not need to fetch all ~50 blocks per row.

  • Of your secondary index, (systemcreated, referrersiteid, __id), only the first column is useful. This is because systemcreated LIKE 'xxx%' is a "range"; once a range is hit, the rest of the index is ineffective. Except...

  • "Index hints" (USE INDEX(...)) may help today but may make things worse tomorrow when the data distribution changes.

  • If you can't get rid of the wild cards in LIKE, I recommend these two indexes:

    INDEX(systemcreated)
    INDEX(referrersiteid)
  • The real speedup can occur by turning the query inside out. That is, find the 16 ids first, then go looking for all those bulky columns:

    SELECT r2...   -- whatever you want
    FROM
    (
        SELECT __id
        FROM referrals
        WHERE `systemcreated` LIKE 'XXXXXX%'
          AND `referrersiteid` LIKE 'XXXXXXXXXXXX%'
        ORDER BY __id DESC
        LIMIT 16
    ) AS r1
    JOIN referrals r2 USING(__id)
    ORDER BY __id DESC   -- yes, this needs repeating

And keep the 3-column secondary index that you have. Even if it must scan a lot more than 16 rows to find the 16 desired, it is a lot less bulky. This means that the subquery ("derived table") will be moderately fast. Then the outer query will still have 16 lookups -- possibly 16*50 blocks to read. The total number of blocks read will still be a lot less.

There is rarely a noticeable difference between ASC and DESC on ORDER BY.

Why does the Optimizer pick the PK instead of the seemingly better secondary index? The PK might be best, especially if the 16 rows are at the 'end' (DESC) of the table. But that would be a terrible choice if it had to scan the entire table without finding 16 rows.

Meanwhile, the wildcard test makes the secondary index only partially useful. The Optimizer makes a decision based on inadequate statistics. Sometimes it feels like the flip of a coin.

If you use my inside-out reformulation, then I recommend the following two composite indexes -- the Optimizer can make a semi-intelligent, semi-correct choice between them for the derived table:

INDEX(systemcreated, referrersiteid, __id),
INDEX(referrersiteid, systemcreated, __id)

It will continue to say "filesort", but don't worry; it's only sorting 16 rows.
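To see which index the Optimizer actually chose for the derived table, run EXPLAIN on the reformulated query; a sketch reusing the hypothetical names above (r2.* stands in for whatever columns you really need):

-- The 'key' column (or "used_key_parts" in EXPLAIN FORMAT=JSON) shows the
-- index picked for the subquery; the filesort it reports covers only 16 rows.
EXPLAIN
SELECT r2.*
FROM
(
    SELECT __id
    FROM referrals
    WHERE `systemcreated` LIKE 'XXXXXX%'
      AND `referrersiteid` LIKE 'XXXXXXXXXXXX%'
    ORDER BY __id DESC
    LIMIT 16
) AS r1
JOIN referrals r2 USING(__id)
ORDER BY __id DESC;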

And, remember, SELECT * is hurting performance. (Though maybe you can't fix that.)

How to tell if a mySql event succeeded or not?

You can try using an EXIT HANDLER:

DROP EVENT IF EXISTS `myDB`.`myEvent`;

DELIMITER $$

CREATE EVENT IF NOT EXISTS `myDB`.`myEvent`
ON SCHEDULE
    EVERY 15 MINUTE
COMMENT 'Some comment'
DO
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION, SQLWARNING
    BEGIN
        -- 1. Flag something that should be tested in chained events
        --    if they should be skipped, e.g.:

        -- INSERT INTO `myDB`.`EventTable` (`event`, `failed`, `when`)
        -- VALUES ('myEvent', TRUE, NOW());

        -- OR

        -- UPDATE `myDB`.`EventTable`
        -- SET `failed` = TRUE,
        --     `when` = NOW()
        -- WHERE `event` = 'myEvent';

        -- 2. Quit.
    END;

    -- Do something that may produce a SQLEXCEPTION or SQLWARNING...
END$$

DELIMITER ;
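If you go the flag-table route, checking whether the last run failed is then a plain query against that (hypothetical) EventTable:

-- Latest status recorded by the handler above; `when` is backtick-quoted
-- because WHEN is a reserved word.
SELECT `event`, `failed`, `when`
FROM `myDB`.`EventTable`
WHERE `event` = 'myEvent'
ORDER BY `when` DESC
LIMIT 1;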

How to use wildcard in sql query in golang

Two things to keep in mind about SQL syntax:

  • The argument to LIKE must be a string.

  • Parameter placeholders must not be inside SQL string delimiters.

So you need to make the argument to LIKE be a concatenated string of '%' and the value of your parameter.

There are two ways to do this.

The first way is to use CONCAT() to concatenate the literal string '%' and your parameter:

query := `
SELECT ...
FROM project p
INNER JOIN tag t ON p.project_id = t.project_id
LEFT JOIN build b ON p.project_id = b.project_id
AND b.status_ind = 'created'
AND p.status_ind = ?
AND p.name LIKE CONCAT('%', ?)
GROUP BY p.project_id, t.colorCode
ORDER BY create_timestamp DESC
LIMIT ?`

rows, err := db.Query(query, status, name, limit)

The second way is to just use a parameter placeholder, and concatenate the string in Go code before passing it to the query.

query := `
SELECT ...
FROM project p
INNER JOIN tag t ON p.project_id = t.project_id
LEFT JOIN build b ON p.project_id = b.project_id
AND b.status_ind = 'created'
AND p.status_ind = ?
AND p.name LIKE ?
GROUP BY p.project_id, t.colorCode
ORDER BY create_timestamp DESC
LIMIT ?`

namePattern := "%" + name

rows, err := db.Query(query, status, namePattern, limit)

In the examples above I use Go back-tick (raw) string literals, which make it easier to write multi-line strings that may contain literal quote characters. This is how I write SQL queries in Go.

As far as I know, MySQL only supports positional query parameter placeholders, not named query parameter placeholders.


