SQL - Other

Convert varbinary column to base64

SELECT
   Id,
   FileInBinary, --the varbinary value we want converted to base64
   CAST('' AS XML).value('xs:base64Binary(sql:column("FileInBinary"))', 'varchar(max)') AS FileInBase64
FROM Files
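The same XQuery trick works in reverse to decode base64 text back into varbinary. A sketch, assuming the base64 text lives in a column named FileInBase64 (hypothetical name):

```sql
SELECT
   Id,
   -- xs:base64Binary over a string value decodes it; .value() returns the bytes
   CAST('' AS XML).value('xs:base64Binary(sql:column("FileInBase64"))', 'varbinary(max)') AS FileInBinary
FROM Files
```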

COUNT(*) vs COUNT(1) vs COUNT(column_name)

COUNT(*)

  • It counts all the rows, including rows made up entirely of NULLs.
  • When * is used as the argument, the row contents are irrelevant: every row returned by the query is counted.


COUNT(1)

  • It counts all the rows including NULLs.
  • A common misconception is that COUNT(1) counts records from the first column. In reality, COUNT(1) evaluates the constant 1 for every row, so every row, NULLs included, contributes to the count. The result is identical to COUNT(*).


COUNT(column_name)

  • It counts all the rows except those where the named column is NULL.
  • When a column name is used as the argument, rows where that column is NULL are skipped, so the result can be smaller than COUNT(*).
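The three behaviors above can be compared side by side on a small table (table and column names are illustrative):

```sql
CREATE TABLE demo (val varchar(10));
INSERT INTO demo (val) VALUES ('a'), (NULL), ('b');

SELECT
   COUNT(*)   AS count_star,   -- 3: every row
   COUNT(1)   AS count_one,    -- 3: every row, same as COUNT(*)
   COUNT(val) AS count_column  -- 2: the row where val is NULL is skipped
FROM demo;
```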

Creating Indexes for Performance

Indexes speed up data retrieval by letting the database locate matching rows without scanning the entire table.


CREATE INDEX idx_user_name ON users (user_name);
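For queries that filter on more than one column, a composite index can help. A sketch, assuming hypothetical columns last_name and first_name on the same users table:

```sql
-- Column order matters: this index supports filtering on last_name alone
-- or on (last_name, first_name), but not on first_name alone.
CREATE INDEX idx_user_full_name ON users (last_name, first_name);
```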


CURRENT_USER, SESSION_USER, SYSTEM_USER and USER_NAME()

SELECT SESSION_USER as [SESSION_USER];
SELECT CURRENT_USER as [CURRENT_USER];
SELECT SYSTEM_USER as [SYSTEM_USER];
SELECT USER_NAME() as [USER_NAME];
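The four can be compared in a single query. In SQL Server, CURRENT_USER, SESSION_USER, and USER_NAME() all return the database user (e.g. dbo), while SYSTEM_USER returns the login name:

```sql
SELECT CURRENT_USER AS [CURRENT_USER],  -- database user
       SESSION_USER AS [SESSION_USER],  -- database user
       USER_NAME()  AS [USER_NAME],     -- database user
       SYSTEM_USER  AS [SYSTEM_USER];   -- login (e.g. a Windows or SQL login)
```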

Data Encryption

Most database systems provide built-in encryption functions to secure sensitive data at rest. The example below uses MySQL's AES_ENCRYPT; note that it returns binary data, so the target column should be a VARBINARY or BLOB type.


UPDATE users
SET password = AES_ENCRYPT('mypassword', 'encryption_key')
WHERE user_id = 1;
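Decryption is the mirror image: in MySQL, AES_DECRYPT with the same key recovers the plaintext (the CAST is needed because AES_DECRYPT returns a binary string):

```sql
SELECT user_id,
       CAST(AES_DECRYPT(password, 'encryption_key') AS CHAR) AS plaintext_password
FROM users
WHERE user_id = 1;
```

Note that reversible encryption is shown here only to illustrate the functions; in practice passwords should be hashed, not encrypted.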


Data Migration with INSERT ... SELECT

This technique copies data between tables or databases, optionally applying transformations along the way.


INSERT INTO new_table (id, name)
SELECT id, name FROM old_table;
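Transformations are applied inline in the SELECT list. A sketch, with the extra column and cleanup rules as illustrative assumptions:

```sql
INSERT INTO new_table (id, name, migrated_at)
SELECT id,
       UPPER(TRIM(name)),     -- normalize the name during migration
       CURRENT_TIMESTAMP      -- stamp when the row was migrated
FROM old_table
WHERE name IS NOT NULL;       -- skip rows that would violate a NOT NULL constraint
```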


Data Types for Date and Time

  1. Time - hh:mm:ss[.nnnnnnn]
  2. Date - YYYY-MM-DD
  3. SmallDateTime - YYYY-MM-DD hh:mm:ss
  4. DateTime - YYYY-MM-DD hh:mm:ss[.nnn]
  5. DateTime2 - YYYY-MM-DD hh:mm:ss[.nnnnnnn]
  6. DateTimeOffset - YYYY-MM-DD hh:mm:ss[.nnnnnnn] [+|-]hh:mm
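In T-SQL, the types above can be compared by casting a single value; the fractional-second precision and storage differ per type:

```sql
DECLARE @now datetime2 = SYSDATETIME();
SELECT CAST(@now AS time)           AS [time],
       CAST(@now AS date)           AS [date],
       CAST(@now AS smalldatetime)  AS [smalldatetime],  -- rounded to the minute
       CAST(@now AS datetime)       AS [datetime],       -- ~3 fractional digits
       @now                         AS [datetime2],      -- up to 7 fractional digits
       CAST(@now AS datetimeoffset) AS [datetimeoffset]; -- adds a UTC offset
```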

Database Indexing

Proper indexing is crucial for query optimization.


-- Identify missing indexes
EXPLAIN SELECT * FROM your_table WHERE indexed_column = 'value';

-- Create missing indexes
CREATE INDEX index_name ON your_table(indexed_column);

Database Logs

Most database systems provide logs that record query execution times. These logs can be a valuable source of information to identify slow queries. You can configure logging levels and output formats according to your database system.


-- Enable slow query logging in MySQL
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- Define the threshold (in seconds) for slow queries

-- View slow query log
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'slow_query_log_file';
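If MySQL's log_output includes TABLE, the slow query log can also be queried directly with SQL instead of reading the log file:

```sql
-- Requires log_output to include 'TABLE'
SET GLOBAL log_output = 'TABLE';

SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;
```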

Database Tuning

Adjust database configuration settings like memory allocation, cache sizes, and connection pool settings to better suit your application’s workload.


-- Adjust the MySQL InnoDB buffer pool size to 1 GB
-- (SET GLOBAL takes the value in bytes; suffixes like 1G are only valid in my.cnf)
SET GLOBAL innodb_buffer_pool_size = 1073741824;

Declare Array

-- Array syntax is vendor-specific; in PostgreSQL (PL/pgSQL) an array
-- variable is declared with a typed array and the ARRAY constructor:
DO $$
DECLARE
    array_variable text[] := ARRAY[ 'data1', 'data2', 'data3' ];
BEGIN
    RAISE NOTICE '%', array_variable[2]; -- arrays are 1-based in PostgreSQL
END $$;

DENSE_RANK()

This function returns the rank of each row within a result set partition, with no gaps in the ranking values. The rank of a specific row is one plus the number of distinct rank values that come before that specific row.


SELECT
    product_id,
    product_name,
    list_price,
    DENSE_RANK() OVER (
        ORDER BY list_price DESC
    ) AS price_rank
FROM
    production.products;
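The difference from RANK() shows up when list_price has ties: RANK() leaves gaps after tied rows, while DENSE_RANK() does not. A sketch against the same table:

```sql
SELECT
    product_name,
    list_price,
    RANK()       OVER (ORDER BY list_price DESC) AS price_rank,       -- ties: 1, 1, 3, ...
    DENSE_RANK() OVER (ORDER BY list_price DESC) AS dense_price_rank  -- ties: 1, 1, 2, ...
FROM
    production.products;
```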