puppet_status_check

A Puppet Module to Promote Preventative Maintenance and Self Service

Version information

  • 0.9.1 (latest), released May 10th 2024
  • 0.9.0
This version is compatible with:
  • Puppet Enterprise 2023.6.x, 2023.5.x, 2023.4.x, 2023.3.x, 2023.2.x, 2023.1.x, 2023.0.x, 2021.7.x
  • Puppet >= 7.18.0 < 9.0.0
Plans:
  • summary

Start using this module

  • r10k or Code Manager
  • Bolt
  • Manual installation
  • Direct download

Add this module to your Puppetfile:

mod 'puppetlabs-puppet_status_check', '0.9.1'
Learn more about managing modules with a Puppetfile

Add this module to your Bolt project:

bolt module add puppetlabs-puppet_status_check
Learn more about using this module with an existing project

Manually install this module globally with the Puppet module tool:

puppet module install puppetlabs-puppet_status_check --version 0.9.1

Direct download is not typically how you would use a Puppet module to manage your infrastructure, but you may want to download the module in order to inspect the code.


Documentation

puppetlabs/puppet_status_check — version 0.9.1 May 10th 2024

Puppet Status Check

Description

Puppet Status Check provides a way to alert the end-user when Puppet is not in an ideal state. It uses pre-set indicators and has a simplified output that directs the end-user to the next steps for resolution.

Setup

What this module affects

This module primarily provides status indicators via the fact named puppet_status_check. Once nodes have been classified with the module, the fact is generated and the optional indicators become available. By default, fact collection only checks the status of the Puppet agent; Puppet infrastructure checks require additional configuration.
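
Once a classified node has run Puppet, you can inspect the fact locally to see what is being reported. This is a minimal check, assuming facter -p (or puppet facts) can resolve the module's custom facts; the keys and values below are only an illustrative sketch and depend on the node's role and current state:

facter -p puppet_status_check
{
  S0001 => true,
  AS001 => true
}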

Setup requirements

Install the module. Plugin sync will then deliver the required facts for this module to each agent node in the environment the module is installed in.

Usage

Classify nodes with puppet_status_check. Notify resources will be added to a node on each Puppet run if any indicators are reporting as false. These can be viewed in the Puppet report for each node, or queried from PuppetDB.

Enable infrastructure checks

The default fact population does not perform checks related to Puppet infrastructure services such as puppetserver, puppetdb, or postgresql. To enable these checks on Puppet servers, set the following parameter on those infrastructure nodes:

puppet_status_check::role: primary

Optionally define the path to pg_config if it is not in the standard path.

puppet_status_check::pg_config_path: /usr/pgsql-16/bin/pg_config
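
Both settings are typically supplied as Hiera data scoped to the infrastructure nodes. A minimal sketch, assuming a hypothetical node-level data file such as data/nodes/primary.example.com.yaml in your control repository:

# data/nodes/primary.example.com.yaml (hypothetical path)
puppet_status_check::role: primary
puppet_status_check::pg_config_path: /usr/pgsql-16/bin/pg_config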

Disable

To completely disable the collection of puppet_status_check facts, uninstall the module or set the enabled parameter to false:

puppet_status_check::enabled: false
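
If you prefer to manage this from a manifest rather than Hiera, a resource-style declaration of the same enabled parameter would look like this sketch:

class { 'puppet_status_check':
  enabled => false,
}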

Reporting Options

Class declaration

To enable fact collection and configure notifications, classify nodes with the puppet_status_check class. Examples using site.pp:

  1. Check basic agent status:
    node 'node.example.com' {
      include 'puppet_status_check'
    }
    
  2. Check puppet server infrastructure status:
    node 'node.example.com' {
      class { 'puppet_status_check':
        role => 'primary',
      }
    }
    
  3. For maximum coverage, report on all default indicators. However, if you need to make exceptions for your environment, set the array parameter indicator_exclusions to a list of the indicators you do not want to report on (a Hiera-based equivalent is sketched after this list).
    class { 'puppet_status_check':
      indicator_exclusions => ['S0001','S0003','S0004'],
    }
    
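The same exclusions can also be expressed as Hiera data via automatic class parameter lookup instead of a resource-style declaration; a sketch, assuming a hypothetical data/common.yaml in your control repository:

puppet_status_check::indicator_exclusions:
  - 'S0001'
  - 'S0003'
  - 'S0004'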

Using a PuppetDB query to report status

As the module uses Puppet's existing fact behavior to gather the status data from each agent, it is possible to use PQL (Puppet Query Language) to gather this information.

Consult your local Puppet administrator to construct a query suited to your organizational needs. Some examples of using the puppetdb_cli gem to query the status check facts are shown below:

  1. To find the complete output from all nodes listed by certname (this could be a large query depending on the number of agent nodes; further filtering is advised):
    puppet query 'facts[certname,value] { name = "puppet_status_check" }'
    
  2. To find the complete output from all nodes listed by certname with the primary role:
    puppet query 'facts[certname,value] { name = "puppet_status_check" and certname in facts[certname] { name = "puppet_status_check_role" and value = "primary" } }'
    
  3. To find those nodes with a specific status check set to false:
    puppet query 'inventory[certname] { facts.puppet_status_check.S0001 = false }'
    

Ad-hoc Report (Plans)

The plan puppet_status_check::summary summarizes the status of each of the checks on target nodes that have the puppet_status_check fact enabled. Sample output can be seen below:

TBC

Setup Requirements

Hiera is used to look up test definitions; this requires placing a static hierarchy in your environment-level hiera.yaml.

plan_hierarchy:
  - name: "Static data"
    path: "static.yaml"
    data_hash: yaml_data

Refer to the bolt hiera documentation for further explanation.
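
Putting this together, the environment-level hiera.yaml carries plan_hierarchy as a top-level key alongside the normal hierarchy; a minimal sketch (the regular hierarchy entries shown are placeholders, not requirements of this module):

---
version: 5
hierarchy:
  - name: "Per-node data"
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Common data"
    path: "common.yaml"
plan_hierarchy:
  - name: "Static data"
    path: "static.yaml"
    data_hash: yaml_data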

Using Static Hiera data to populate indicator_exclusions when executing plans

Place the plan_hierarchy listed in the step above in the environment layer.

Create a static.yaml file in the environment layer's Hiera data directory:

puppet_status_check::indicator_exclusions:
  - '<TEST ID>'

Indicator IDs within the array will be excluded when running the puppet_status_check::summary plan.

Running the plans

The puppet_status_check::summary plan can be run with Puppet Bolt. More information on the plan's parameters can be found in REFERENCE.md.

  1. Example call from the command line to run puppet_status_check::summary against all infrastructure nodes:
    bolt plan run puppet_status_check::summary role=primary
    
  2. Example call from the command line to run puppet_status_check::summary against all regular agent nodes:
    bolt plan run puppet_status_check::summary role=agent
    
  3. Example call from the command line to run against a set of infrastructure nodes:
    bolt plan run puppet_status_check::summary targets=server-70aefa-0.region-a.domain.com,psql-70aefa-0.region-a.domain.com
    
  4. Example call from the command line to exclude infrastructure indicators when running puppet_status_check::summary:
    bolt plan run puppet_status_check::summary -p '{"indicator_exclusions": ["S0001","S0021"]}'
    
  5. Example call from the command line to exclude agent indicators when running puppet_status_check::summary:
    bolt plan run puppet_status_check::summary -p '{"indicator_exclusions": ["AS001"]}'
    

Reference

Facts

puppet_status_check_role

This fact is used to determine which status checks are included on an infrastructure node. Classify the puppet_status_check module with a role parameter to change the role.

Role | Description
primary | The node hosts a puppetserver, puppetdb, database, and certificate authority
compiler | The node hosts a puppetserver and puppetdb
postgres | The node hosts a database
agent | The node runs a puppet agent service

The role is agent by default.
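
To confirm which role a node has picked up, inspect the fact directly on that node; for example, assuming pluginsync has already delivered the module's facts:

facter -p puppet_status_check_role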

puppet_status_check

This fact is confined to run on infrastructure nodes only.

Refer to the table below for next steps when any indicator reports false.

As this module was derived from the Puppet Enterprise status check module, links within the self-service steps below may reference Puppet Enterprise specific solutions. The goal is to update these over time to cover Open Source Puppet as well.

Indicator ID | Description | Self-service steps
S0001 | The puppet service is running on agents | See documentation.
S0003 | Infrastructure components are running in noop | Do not routinely configure noop on infrastructure nodes, as it prevents the management of key infrastructure settings. Disable this setting on infrastructure components.
S0004 | Puppet Server status endpoint is returning any errors | Execute puppet infrastructure status. Whichever service returns in a state that is not running, examine the logs for that service to identify the fault.
S0005 | Certificate authority (CA) cert expires in the next 90 days | Install the puppetlabs-ca_extend module and follow the steps to extend the CA cert.
S0006 | Puppet metrics collector is enabled and collecting metrics | The metrics collector is a tool that lets you monitor an installation. If it is not enabled, enable it.
S0007 | There is at least 20% disk free on the PostgreSQL data partition | Determine if growth is slow and expected within the TTL of your data. If there is an unexpected increase, use this article to troubleshoot PuppetDB.
S0008 | There is at least 20% disk free on the codedir data partition | This can indicate you are deploying more code from the code repo than there is space for on the infrastructure components, or that something else is consuming space on this partition. Run puppet config print codedir, check that the codedir partition has enough capacity for the code being deployed, and check that no other outside files are consuming this data mount.
S0009 | Puppetserver service is running and enabled | Check that the service can be started and enabled by running puppet resource service pe-puppetserver ensure=running, and examine /var/log/puppetlabs/puppetserver/puppetserver.log for failures.
S0010 | PuppetDB service is running and enabled | Check that the service can be started and enabled by running puppet resource service pe-puppetdb ensure=running, and examine /var/log/puppetlabs/puppetdb/puppetdb.log for failures.
S0011 | Postgres service is running and enabled | Check that the service can be started and enabled by running puppet resource service pe-postgres ensure=running, and examine /var/log/puppetlabs/postgresql/<postgresversion>/postgresql-<today>.log for failures.
S0012 | Puppet produced a report during the last run interval | Troubleshoot Puppet run failures.
S0013 | The catalog was successfully applied during the last Puppet run | Troubleshoot Puppet run failures.
S0014 | Anything in the command queue is older than a Puppet run interval | This can indicate that PuppetDB performance is inadequate for incoming requests. Review PuppetDB performance and use metrics to pinpoint the issue.
S0015 | The agent host certificate is expiring in the next 90 days | Puppet Enterprise has built-in functionality to regenerate infrastructure certificates; see the following documentation.
S0016 | There are no OutOfMemory errors in the Puppetserver log | Increase the Java heap size for that service.
S0017 | There are no OutOfMemory errors in the PuppetDB log | Increase the Java heap size for that service.
S0019 | There are sufficient JRubies available to serve agents | Insufficient JRuby availability results in queued puppet agents and overall poor system performance. There can be many causes: insufficient server tuning for load, a thundering herd, or insufficient system resources for scale.
S0021 | There is at least 10% free system memory | Ensure your system hardware availability matches the recommended configuration; note this assumes no third-party software is using significant resources, so adapt requirements accordingly. Examine metrics from the server and determine whether the memory issue is persistent.
S0023 | Certificate authority CRL does not expire within the next 90 days | Reissue a new CRL from the Puppet CA; note this will also remove any revoked certificates. To do this, follow the instructions in this module.
S0024 | Files in the puppetdb discard directory are more than 1 week old | Recent files indicate PuppetDB may have an issue processing incoming data. See this article for more information.
S0025 | The host's copy of the CRL does not expire in the next 90 days | If S0023 on the primary role is also false, use the resolution steps in S0023. If S0023 on the primary is true, follow this article.
S0026 | The puppetserver JVM heap memory is set to an efficient value | Due to an oddity in how JVM memory is utilized, most Java applications are unable to consume heap memory between ~31GB and ~48GB. If the heap memory is set within this range, reduce it to allocate server resources efficiently. See this article for more information.
S0027 | The puppetdb JVM heap memory is set to an efficient value | Due to an oddity in how JVM memory is utilized, most Java applications are unable to consume heap memory between ~31GB and ~48GB. If the heap memory is set within this range, reduce it to allocate server resources efficiently. See this article for more information.
S0029 | PostgreSQL connections are less than 90% of the configured maximum | First determine the need to increase connections: evaluate whether this message appears on every Puppet run, or whether idle connections from recent component restarts may be to blame. If persistent, the impact is minimal unless you need to add more components such as Compilers or Replicas; if you plan to increase the number of components on your system, increase the max_connections value. Consider also increasing shared_buffers in that case, as each connection consumes RAM.
S0030 | Puppet is configured with use_cached_catalog set to true | It is recommended not to enable use_cached_catalog, as enabling it prevents the enforcement of key infrastructure settings. See our documentation for more information.
S0033 | Hiera version 5 is in use | Upgrading to Hiera 5 offers major advantages.
S0034 | Puppetserver has been upgraded within the last year | Upgrade your instance.
S0035 | puppet module list is not returning any warnings | Run puppet module list --debug and resolve the issues shown. The Puppetfile does NOT include Forge module dependency resolution; ensure that every module needed for all of the specified modules to run is included. Refer to managing environment content with a Puppetfile, and refer to individual modules on the Puppet Forge for dependency information.
S0036 | Puppetserver configured max-queued-requests is less than 151 | The maximum value for jruby_puppet_max_queued_requests is 150.
S0038 | Number of environments under $codedir/environments is less than 100 | Having a large number of code environments can negatively affect Puppet Server performance. See the Configuring Puppet Server documentation for more information. Remove any environments that are not required; if all are required, you can ignore this warning.
S0039 | Puppetserver has not reached the configured queue-limit-hit-rate | See the max-queued-requests article for more information.
S0045 | Puppetserver is configured with a reasonable number of JRubies | Having too many JRubies can reduce the amount of heap space available to puppetserver and cause excessive garbage collection, reducing performance. While it is possible to increase the heap along with the number of JRubies, we have observed diminishing returns with more than 12 JRubies. Therefore an upper limit of 12 is recommended, with between 1 and 2 GB of heap memory allocated for each.
AS001 | The agent host certificate is not expiring in the next 90 days | Use a puppet query to find expiring host certificates: puppet query 'inventory[certname] { facts.puppet_status_check.AS001 = false }'
AS003 | If set, the certname is not in the wrong section of puppet.conf | The certname should only be placed in the [main] section to prevent unforeseen issues with the puppet agent. Refer to the documentation on configuring the certname.
AS004 | The host's copy of the CRL does not expire in the next 90 days | Use the resolution steps in S0023. If S0023 on the primary role is true, follow this article.

How to report an issue or contribute to the module

If you have a reproducible bug, you can open an issue directly on the module's GitHub issues page. We also welcome PR contributions to improve the module; please see the further details about contributing.