PostgreSQL
From InfiniteERP Wiki
= Introduction =
 
= Database Development =

== Validation ==
=== Check: Columns with search reference and reference value null ===

Every column returned by this query must be corrected to comply with the Openbravo development standard.

<pre>
SELECT AD_COLUMN_ID AS referencekey_id, TABLENAME || ' - ' || COLUMNNAME AS RECORD_ID, 0 AS AD_ROLE_ID, NULL AS AD_USER_ID,
  'The column ' || COLUMNNAME || ' in table ' || TABLENAME || ' is referenced as search and doesn''t have an associated reference value while having active updateable fields.' AS DESCRIPTION,
  'Y' AS ISACTIVE, 0 AS AD_ORG_ID, 0 AS AD_CLIENT_ID, now() AS CREATED, 0 AS CREATEDBY, now() AS UPDATED, 0 AS UPDATEDBY
FROM AD_COLUMN, AD_TABLE
WHERE AD_REFERENCE_ID = '30'
  AND AD_COLUMN.AD_TABLE_ID = AD_TABLE.AD_TABLE_ID
  AND AD_REFERENCE_VALUE_ID IS NULL
  AND AD_TABLE.ISACTIVE = 'Y'
  AND AD_COLUMN.ISACTIVE = 'Y'
  AND AD_COLUMN.ISPARENT = 'N'
  --AND AD_COLUMN.ISUPDATEABLE = 'Y'
  AND EXISTS (SELECT 1
              FROM AD_FIELD, AD_TAB, AD_WINDOW
              WHERE AD_FIELD.AD_COLUMN_ID = AD_COLUMN.AD_COLUMN_ID
                AND AD_FIELD.AD_TAB_ID = AD_TAB.AD_TAB_ID
                AND AD_WINDOW.AD_WINDOW_ID = AD_TAB.AD_WINDOW_ID
                AND AD_FIELD.ISREADONLY = 'N'
                AND AD_TAB.ISREADONLY = 'N'
                AND AD_TAB.ISACTIVE = 'Y'
                AND AD_FIELD.ISACTIVE = 'Y'
                AND AD_WINDOW.ISACTIVE = 'Y'
                AND ISDISPLAYED = 'Y')
</pre>
  
 
= Database Maintenance and Monitoring script =

== Monitor Index status and size for optimization ==

Over-optimizing is a real risk.
We always want our database to be performant, so we keep things in memory/cache (see earlier) and we index data so we don't have to scan everything on disk.
But there is a trade-off when it comes to indexing your database.
Each index the system has to maintain slows down your write throughput.
That is fine when you do need to speed up queries, as long as the indexes are actually being used.
If you added an index years ago, but something within your application changed and you no longer need it, it is best to remove it.
Postgres makes it simple to query for unused indexes, so you can easily reclaim some performance by removing them:

<pre>
SELECT
  schemaname || '.' || relname AS table,
  indexrelname AS index,
  pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
  idx_scan AS index_scans
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
--WHERE NOT indisunique AND idx_scan < 50 AND pg_relation_size(relid) > 5 * 8192
--where relname = 'ret_gen_tsm_targetomzetline'
WHERE relname IN ('ret_gen_tsm_targetomzetgrow','ret_gen_tsm_targetomzetline','ret_tsmrekap_category','ret_gen_tsmtarget_category')
ORDER BY relname ASC,
  pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,
  --idx_scan asc,
  pg_relation_size(i.indexrelid) DESC;

--drop index ret_gen_tsm_targetomzetline_tag_status_idx;
</pre>


== Monitor Temporary Table ==

<pre>
/*
 * Developers often create temp tables for ad-hoc testing, but sometimes forget to drop them.
 * Unused temp tables then stay alive in idle connections and consume disk space unnecessarily.
 *
 * DBAs should identify unused temp tables and inform the developers so the tables can be dropped.
 */

SELECT
  n.nspname AS SchemaName
  ,c.relname AS RelationName
  ,CASE c.relkind
     WHEN 'r' THEN 'table'
     WHEN 'v' THEN 'view'
     WHEN 'i' THEN 'index'
     WHEN 'S' THEN 'sequence'
     WHEN 's' THEN 'special'
   END AS RelationType
  ,pg_catalog.pg_get_userbyid(c.relowner) AS RelationOwner
  ,pg_size_pretty(pg_relation_size(n.nspname ||'.'|| c.relname)) AS RelationSize
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n
  ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','s')
  AND (n.nspname !~ '^pg_toast' AND n.nspname LIKE 'pg_temp%')
ORDER BY pg_relation_size(n.nspname ||'.'|| c.relname) DESC
</pre>
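
A temp table disappears automatically when its owning session ends. The N in a pg_temp_N schema name is the backend ID, so a reported temp schema can be mapped to a live process. A minimal sketch, assuming the query above reported schema pg_temp_3:

<pre>
-- Map temp schema pg_temp_3 to its backend process ID
-- (3 is the backend ID taken from the schema name):
SELECT pg_stat_get_backend_pid(3) AS pid;

-- If that session is idle and its temp tables are no longer needed,
-- terminating it drops them and frees the disk space:
-- SELECT pg_terminate_backend(<pid from above>);
</pre>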


== Monitor Dead Tuples ==

<pre>
/*
 * Under the covers, Postgres is essentially a giant append-only log.
 * When you write data, it appends to the log; when you update data, it marks the old record as invalid and writes a new one;
 * when you delete data, it just marks it invalid. Later, Postgres comes through and vacuums those dead records (also known as tuples).
 * All those unvacuumed dead tuples are what is known as bloat. Bloat can slow down other writes and create other issues.
 * Paying attention to your bloat, and noticing when it is getting out of hand, is key for tuning vacuum on your database.
 */

WITH constants AS (
  SELECT current_setting('block_size')::numeric AS bs, 23 AS hdr, 4 AS ma
), bloat_info AS (
  SELECT
    ma,bs,schemaname,tablename,
    (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
    (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
  FROM (
    SELECT
      schemaname, tablename, hdr, ma, bs,
      SUM((1-null_frac)*avg_width) AS datawidth,
      MAX(null_frac) AS maxfracsum,
      hdr+(
        SELECT 1+count(*)/8
        FROM pg_stats s2
        WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
      ) AS nullhdr
    FROM pg_stats s, constants
    GROUP BY 1,2,3,4,5
  ) AS foo
), table_bloat AS (
  SELECT
    schemaname, tablename, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta
  FROM bloat_info
  JOIN pg_class cc ON cc.relname = bloat_info.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = bloat_info.schemaname AND nn.nspname <> 'information_schema'
), index_bloat AS (
  SELECT
    schemaname, tablename, bs,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM bloat_info
  JOIN pg_class cc ON cc.relname = bloat_info.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = bloat_info.schemaname AND nn.nspname <> 'information_schema'
  JOIN pg_index i ON indrelid = cc.oid
  JOIN pg_class c2 ON c2.oid = i.indexrelid
)
SELECT
  type, schemaname, object_name, bloat, pg_size_pretty(raw_waste) as waste
FROM
(SELECT
  'table' as type,
  schemaname,
  tablename as object_name,
  ROUND(CASE WHEN otta=0 THEN 0.0 ELSE table_bloat.relpages/otta::numeric END,1) AS bloat,
  CASE WHEN relpages < otta THEN '0' ELSE (bs*(table_bloat.relpages-otta)::bigint)::bigint END AS raw_waste
FROM
  table_bloat
    UNION
SELECT
  'index' as type,
  schemaname,
  tablename || '::' || iname as object_name,
  ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS bloat,
  CASE WHEN ipages < iotta THEN '0' ELSE (bs*(ipages-iotta))::bigint END AS raw_waste
FROM
  index_bloat) bloat_summary
ORDER BY raw_waste DESC, bloat DESC
</pre>
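
Once the worst offenders are known, bloat is reclaimed by vacuuming. A hedged sketch (the table name is illustrative, taken from the trigger examples elsewhere on this page):

<pre>
-- Reclaim dead tuples without blocking reads/writes
-- (the space is reused by Postgres, not returned to the OS):
VACUUM (VERBOSE, ANALYZE) c_order;

-- Rewrite the table completely and return the space to the OS
-- (takes an ACCESS EXCLUSIVE lock, so only during a maintenance window):
-- VACUUM FULL c_order;
</pre>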


== Monitor Cache Hit Ratio ==

<pre>
/*
 * Monitor your cache hit ratio.
 * URL : https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/
 */
SELECT
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit)  as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM
  pg_statio_user_tables;
</pre>
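
The same check can be run for index blocks; as a sketch, pg_statio_user_indexes exposes the matching counters (nullif guards against division by zero on an idle database):

<pre>
SELECT
  sum(idx_blks_read) as idx_read,
  sum(idx_blks_hit)  as idx_hit,
  sum(idx_blks_hit) / nullif(sum(idx_blks_hit) + sum(idx_blks_read), 0) as ratio
FROM
  pg_statio_user_indexes;
</pre>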


== Monitoring Database Query Performance ==

<pre>
/*
 * Monitor your database query performance.
 * pg_stat_statements records a lot of valuable stats about which queries are run, how fast they return, how many times they are run, etc.
 * Checking this set of queries regularly can tell you where it is best to add indexes,
 * or where to optimize your application so your query calls are not so excessive.
 * URL : https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/
 *
 * Run "create extension pg_stat_statements;" first if it has never been done.
 */
SELECT query,
       calls,
       ceil(total_time) as total_time,
       ceil(total_time / calls) as time_per_call,
       ceil(stddev_time) as stddev_time,
       rows,
       rows / calls as rows_per_call,
       ceil(100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0)) AS hit_percent
FROM pg_stat_statements
WHERE query not similar to '%pg_%'
--WHERE query similar to '%omzetline%'
--and calls > 500
--ORDER BY calls desc
--ORDER BY total_time desc
order by time_per_call desc
--ORDER BY rows_per
LIMIT 100;
</pre>


== Locks held by Open Transactions (pg_locks) ==

<pre>
/*
 * Script to find information about the locks held by open transactions (pg_locks).
 *
 * The PostgreSQL view pg_locks contains the main information about all types of locks in the database server.
 * It contains one row per active lockable object.
 * This script is very helpful to PostgreSQL database administrators.
 */

SELECT
  pl.pid AS ProcessID
  ,psa.datname AS DatabaseName
  ,psa.usename AS UserName
  ,psa.application_name AS ApplicationName
  ,ps.relname AS ObjectName
  ,psa.query_start AS QueryStartTime
  ,psa.state AS QueryState
  ,psa.query AS SQLQuery
  ,pl.locktype
  ,pl.tuple AS TupleNumber
  ,pl.mode AS LockMode
  ,pl.granted -- true if the lock is held, false if the lock is awaited
FROM pg_locks AS pl
LEFT JOIN pg_stat_activity AS psa
  ON pl.pid = psa.pid
LEFT JOIN pg_class AS ps
  ON pl.relation = ps.oid
</pre>
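
When a lock in the result is not granted, pg_blocking_pids() (available since PostgreSQL 9.6) shows directly which sessions are doing the blocking:

<pre>
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
</pre>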


== Check Trigger Status ==

<pre>
SELECT * FROM pg_trigger
WHERE tgisinternal = false AND tgenabled = 'D'

--AND tgrelid = 'public.c_order'::regclass

-- Enable trigger:
-- ALTER TABLE m_product ENABLE TRIGGER m_product_trg;

-- Disable trigger:
-- ALTER TABLE m_product DISABLE TRIGGER m_product_trg;
</pre>


=== More advanced ===

<pre>
-- Check disabled triggers

select trigger_name as referencekey_id,
       null as record_id,
       0 as ad_role_id,
       null as ad_user_id,
       'Trigger '||trigger_name||' on table '||table_name||' is not enabled' as description,
       'Y' as isActive,
       0 as ad_org_id,
       0 as ad_client_id,
       now() as created,
       0 as createdBy,
       now() as updated,
       0 as updatedBy
from user_triggers  -- note: user_triggers is an Oracle catalog view; on PostgreSQL use pg_trigger as above
where status != 'ENABLED'

-- Enable a trigger if needed:

--ALTER TABLE c_orderline ENABLE TRIGGER oez_order_line_addcost;
--ALTER TABLE m_product ENABLE TRIGGER m_product_trg;
--ALTER TABLE c_conversion_rate ENABLE TRIGGER c_conversion_rate_trg;
</pre>


== Monitor LIVE and DEAD Tuples ==

<pre>
SELECT
  relname AS ObjectName
  ,pg_stat_get_live_tuples(c.oid) AS LiveTuples
  ,pg_stat_get_dead_tuples(c.oid) AS DeadTuples
FROM pg_class c
--where relname like 'c_%'
ORDER BY pg_stat_get_dead_tuples(c.oid) DESC;
--order by pg_stat_get_live_tuples(c.oid) desc;
</pre>


== Monitor LIVE and DEAD Tuples with AUTOVACUUM ==

<pre>
SELECT
    schemaname
    ,relname
    ,n_live_tup
    ,n_dead_tup
    ,last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup
    / (n_live_tup
       * current_setting('autovacuum_vacuum_scale_factor')::float8
       + current_setting('autovacuum_vacuum_threshold')::float8)
    DESC
</pre>
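
Tables that float to the top of this list can be given more aggressive autovacuum settings per table, instead of changing the global defaults. A sketch (the table name is illustrative):

<pre>
-- Vacuum after roughly 5% of rows are dead instead of the 20% default:
ALTER TABLE c_order SET (
  autovacuum_vacuum_scale_factor = 0.05,
  autovacuum_vacuum_threshold    = 200
);
</pre>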


= How to =

== Terminate Idle Transactions for Postgres version 12 and previous ==

'''Background''':
<br>
PostgreSQL offers the '''idle_in_transaction_session_timeout''' setting to end idle-in-transaction sessions automatically; when that setting is unavailable or cannot be changed on your server, the workaround below can be used instead.

On an older release such as PostgreSQL 10, you can use a SQL function to identify and terminate idle transactions based on your requirements.

Create the function below:
<pre>
CREATE OR REPLACE FUNCTION oez_terminate_idle_sessions(idle_timeout_minutes integer) RETURNS void AS $$
BEGIN
    -- Terminate idle-in-transaction sessions that exceed the specified timeout in minutes
    PERFORM pg_terminate_backend(pg_stat_activity.pid)
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND now() - pg_stat_activity.query_start > interval '1 minute' * idle_timeout_minutes;
END;
$$ LANGUAGE plpgsql;
</pre>


To use this function, call it with your desired idle timeout. For example, to terminate sessions that have been idle in a transaction for more than 5 minutes:
<pre>
SELECT oez_terminate_idle_sessions(5); -- terminate sessions idle in transaction for more than 5 minutes
</pre>
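
To run this cleanup automatically, the call can be scheduled inside the database, for example with the pg_cron extension (an assumption: pg_cron must be installed on the server and preloaded via shared_preload_libraries):

<pre>
-- Run the cleanup every 5 minutes (requires the pg_cron extension):
CREATE EXTENSION IF NOT EXISTS pg_cron;
SELECT cron.schedule('*/5 * * * *', 'SELECT oez_terminate_idle_sessions(5)');
</pre>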


== Convert Integer or numeric to row number ==
<pre>
SELECT
    generate_series(1, tot_wo) AS row
FROM oez_gen_simple_wo sw
</pre>

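generate_series() expands each source row into tot_wo numbered rows; a quick standalone illustration:

<pre>
SELECT generate_series(1, 3) AS row;
-- returns three rows: 1, 2, 3
</pre>
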
[[Category:Database_Maintenance]]

Latest revision as of 19:25, 24 January 2024
