turbot/gcp_thrifty
Detect & correct compute disks with low usage
Overview
Compute disks with low average usage may indicate that they are no longer required and should be reviewed.
This query trigger detects compute disks with low average usage and then either sends a notification or attempts to perform a predefined corrective action.
Getting Started
By default this trigger is disabled; it can be enabled by configuring the variables below:
- `compute_disks_with_low_usage_trigger_enabled` should be set to `true`, as the default is `false`.
- `compute_disks_with_low_usage_trigger_schedule` should be set to your desired running schedule.
- `compute_disks_with_low_usage_default_action` should be set to your desired action (i.e. `"notify"` for notifications, `"delete_disk"` to delete the disk, or `"snapshot_and_delete_compute_disk"` to snapshot and then delete the disk).
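Putting the variables above together, a `.fpvars` file might look like the sketch below. The variable names come from this page; the schedule and action values are illustrative choices, not required defaults.

```hcl
# Example .fpvars file (values are illustrative assumptions)
compute_disks_with_low_usage_trigger_enabled  = true
compute_disks_with_low_usage_trigger_schedule = "15m"      # run every 15 minutes
compute_disks_with_low_usage_default_action   = "notify"   # notify only; no deletion
```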
Then start the server:
flowpipe server
or, if you've set the variables in a `.fpvars` file:
flowpipe server --var-file=/path/to/your.fpvars
Query
with disk_usage as (
  select
    project,
    location as zone,
    name as disk_name,
    _ctx,
    round(avg(max)) as avg_max,
    count(max) as days
  from (
    select
      project,
      name,
      location,
      _ctx,
      cast(maximum as numeric) as max
    from
      gcp_compute_disk_metric_read_ops_daily
    where
      date_part('day', now() - timestamp) <= 30
    union all
    select
      project,
      name,
      location,
      _ctx,
      cast(maximum as numeric) as max
    from
      gcp_compute_disk_metric_write_ops_daily
    where
      date_part('day', now() - timestamp) <= 30
  ) as read_and_write_ops
  group by
    name,
    project,
    _ctx,
    location
)
select
  concat(disk_name, ' [', zone, '/', project, ']') as title,
  disk_name,
  project,
  zone,
  _ctx ->> 'connection_name' as cred
from
  disk_usage
where
  avg_max < 100;
Schedule
15m