
Puppet's Frequently Asked Questions page is a central hub where customers can go with their most common questions. These are the 288 most popular questions Puppet receives.
Follow these instructions to migrate your deployment from a split or monolithic installation of Puppet Enterprise 2016.4.x and later to a monolithic installation of 2018.1 to 2018.1.7 or 2019.0.0 to 2019.0.2.
Find steps to migrate in our docs for these versions:
PE 2018.1.8 and later versions of 2018.1.x
PE 2019.0.3 and all later versions of 2019.0.x
PE 2019.1.x
To migrate your PE deployment, you'll prepare and move certs, database, classifier, configuration files, Puppet code, and Hiera data while keeping your current infrastructure up and running.
Note:
This article assumes that you're using the instance of PostgreSQL provided by PE and a default Hiera configuration with hiera.yaml in the default location, /etc/puppetlabs/puppet/hiera.yaml.
Version and installation information
PE version: 2016.4.x and later
Migrate to: 2018.1.0 to 2018.1.7 or 2019.0.0 to 2019.0.2
Solution
Step One: Prepare and back up your cert information
Complete these steps on your current master:
If your new master is (or was) an agent of the current master, clean the new master's cert to prevent SSL cert issues:
If your master is running PE 2018.1.x or earlier
Run puppet cert clean <NEW MASTER CERTNAME>
If your master is running PE 2019.0 or later
Run puppetserver ca clean --certname <NEW MASTER CERTNAME> ; find /etc/puppetlabs/puppet/ssl -name <NEW MASTER CERTNAME>.pem -delete
Set a directory for your backups: export BACKUP_DIR=/tmp/backup
Create the directory: mkdir $BACKUP_DIR
Back up the SSL directory: tar -zcvf $BACKUP_DIR/puppet_ssl.tar.gz /etc/puppetlabs/puppet/ssl/
Transfer the resulting tarball to the /tmp/backup directory on the new master using your preferred method.
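For example, a minimal transfer using scp, assuming root SSH access to the new master (any secure copy method works):
scp $BACKUP_DIR/puppet_ssl.tar.gz root@<NEW MASTER FQDN>:/tmp/backup/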
Step Two: Back up the databases
On the PuppetDB node, back up the databases:
If your current installation is monolithic, the PuppetDB node is the same node as the master, so you've already created the backup directories and you can skip this step. If your current installation is split, set and create the backup directory on the PuppetDB node:
export BACKUP_DIR=/tmp/backup
mkdir $BACKUP_DIR
Set the pe-postgres user as the owner of the backup directory: chown pe-postgres:pe-postgres $BACKUP_DIR
Navigate to the backup directory: cd $BACKUP_DIR
Back up the databases:
for db in pe-activity pe-classifier pe-orchestrator pe-puppetdb pe-rbac; do echo "Backing up $db" ; sudo -u pe-postgres /opt/puppetlabs/server/bin/pg_dump -Fc $db -f $db.backup.bin; done
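Optionally, before transferring, confirm that each dump file exists and is readable as an archive. pg_restore with --list prints an archive's table of contents without restoring anything:
ls -lh $BACKUP_DIR/*.backup.bin
sudo -u pe-postgres /opt/puppetlabs/server/bin/pg_restore --list $BACKUP_DIR/pe-puppetdb.backup.bin | head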
Transfer the resulting files to the /tmp/backup directory on the new master using your preferred method.
Step Three: Restore the SSL directory
Restore the SSL directory on the new master:
Set /tmp/backup as the backup directory: export BACKUP_DIR=/tmp/backup
Create the directory: mkdir -p /etc/puppetlabs/puppet
Restore the SSL directory: tar -zxvf $BACKUP_DIR/puppet_ssl.tar.gz -C /
Advance the next certificate serial number by 1000 so certificates can be issued on both installations without overlap:
current_serial_num=$(cat /etc/puppetlabs/puppet/ssl/ca/serial)
printf '%04X' $(( 16#${current_serial_num} + 1000 )) > /etc/puppetlabs/puppet/ssl/ca/serial
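As a worked example of the arithmetic: if the serial file contains 000F (decimal 15), then 15 + 1000 = 1015, and printf writes 03F7, the zero-padded hexadecimal form of 1015:
# cat /etc/puppetlabs/puppet/ssl/ca/serial
000F
# printf '%04X' $(( 16#000F + 1000 ))
03F7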
Step Four: Install PE
On the new master, install PE.
PE 2018.1.0 and later no longer installs MCollective by default. If your infrastructure relies on MCollective, you must enable it during the installation of the new master. Future releases of PE will not include MCollective. To prepare, migrate your MCollective work to Puppet orchestrator to automate tasks and create consistent, repeatable administrative processes.
Step Five: Restore the databases and classifier data
On the new master, restore the databases, fix database permissions, and recreate database extensions:
Set /tmp/backup as the backup directory: export BACKUP_DIR=/tmp/backup
Navigate to the backup directory: cd $BACKUP_DIR
Stop all services except for pe-postgresql:
for svc in puppet pe-puppetserver pe-puppetdb pe-console-services pe-nginx pe-activemq pe-orchestration-services pxp-agent; do echo "Stopping $svc" ; puppet resource service $svc ensure=stopped; done
Restore the databases from the backup directory:
for db in pe-activity pe-classifier pe-orchestrator pe-puppetdb pe-rbac; do echo "Restoring $db" ; sudo -u pe-postgres /opt/puppetlabs/server/bin/pg_restore -Cc $BACKUP_DIR/$db.backup.bin -d template1; done
Fix database permissions and recreate database extensions. Run
/opt/puppetlabs/bin/puppet-infrastructure configure
Step Six: (Optional) Deactivate and clear certificates on your old infrastructure nodes
This step is not required for the migration, but completing it deactivates infrastructure nodes in PuppetDB, deletes the old master's node information cache, frees up licenses, and allows you to reuse hostnames on new nodes.
Warning: If your old master and the new master have the same certificate name, do not complete this step; it will delete your new master.
On the new master, run the following command:
If you are migrating to 2019.0.x:
puppet node purge <OLD MASTER CERTNAME> ; find /etc/puppetlabs/puppet/ssl -name <OLD MASTER CERTNAME>.pem -delete
If you are migrating to 2018.1.x:
puppet node purge <OLD MASTER CERTNAME>; puppet cert clean <OLD MASTER CERTNAME>
If your old deployment is split, repeat the previous step on the new master, replacing the old master's certname with the PuppetDB and console certnames.
Step Seven: Migrate configuration files, Puppet code, and Hiera data
Your deployment determines the specifics of how to migrate your configuration files, Puppet code, and Hiera data. Common tasks include:
Edit puppet.conf to add customizations from your old deployment on the new master.
If you use Code Manager or r10k, configure Code Manager or r10k to deploy code on the new master.
If you don't use Code Manager or r10k, copy the contents of the code directory /etc/puppetlabs/code/ to the new master.
Move your Hiera data and copy your old hiera.yaml file to /etc/puppetlabs/puppet/hiera.yaml on the new master.
If you are migrating from PE 2016.4, your existing Hiera 3 hiera.yaml file will work. However, upgrading to Hiera 5 improves performance and makes future upgrades easier. For more information on migrating from Hiera 3 to Hiera 5, please see our documentation.
Copy classification customizations to the new installation.
Step Eight: Configure your agents and regenerate compile master certs
Configure your new agents and compile masters.
Point the agents at the new master. On each agent, update puppet.conf: puppet config set server <NEW MASTER FQDN>
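For example, with a hypothetical new master named master02.example.com, set and then verify the setting:
puppet config set server master02.example.com
puppet config print server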
Regenerate certs for all compile masters using our documentation for PE 2018.1 or PE 2019.0, making sure to include --allow-dns-alt-names when signing the compile master's certificate request.
If you migrated to a later version of PE, upgrade your compile masters to the same version as your new master. SSH into each compile master and run:
/opt/puppetlabs/puppet/bin/curl --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem https://<MASTER FQDN>:8140/packages/current/upgrade.bash | sudo bash
If you migrated to a newer version of PE, upgrade the agent nodes.
After upgrading to Puppet Enterprise 2016.2.0 or running puppet enterprise configure on a PE 2016.2.0 installation, you might get a "classification conflict" error when an agent is classified in more than one environment node group.
Upgrading to PE 2016.2.0 or running puppet enterprise configure causes a rule to be restored that matches all nodes to the Production environment node group. The restoration of this rule creates a classification conflict with environment groups that do not inherit from the Production environment node group.
Note: If all of your environment node groups inherit from the Production environment node group, this article does not apply.
Error messages
Error: Could not retrieve catalog from remote server: Error 400 on
SERVER: Failed when searching for node agentname.domain.com:
Classification of agentname.domain.com failed due to a classification
conflict: The node was classified into groups named "<custom
environment group>", "PE Agent", "PE MCollective", and "Production
environment" that defined conflicting values for the environment.
Version and installation information
PE version: 2016.2.0
This issue was resolved in PE 2016.2.1.
OS: Any agent
Solution
Resolve the error by removing the rule adding all nodes to the Production environment node group.
Log into the console.
Navigate to Nodes > Classification. Select the Production environment node group.
In the Rules tab, find the rule that has the following values:
Fact: name
Operator: ~
Value: .*
Click Remove to the right of the rule. Commit the change by clicking the Commit 1 change button.
Your security is important to us. Here are our recommended resources for staying up to date on security and bug fixes for Puppet products.
Stay informed about security for Puppet products:
Sign up for the Puppet Security Announce List
Read Security and vulnerability announcements
Learn about our disclosure and submission process
Learn more about bug fixes:
Release notes for the latest version of Puppet Enterprise
Release notes for Continuous Delivery for Puppet Enterprise
Release notes for Discovery
Release notes for Remediate
Use the table below to find out how to get help resolving issues with your Puppet Enterprise license, including issues purchasing, installing, and splitting your license.
You can manage licenses for other Puppet products, including Continuous Delivery for Puppet Enterprise, Discovery, or Remediate at https://licenses.puppet.com/.
Version and installation information
PE version: All supported versions
Solution
Issue: I need more information about purchasing a license key.
How to get help: Read the documentation for your version of Puppet Enterprise.
Issue: I purchased a license key and I haven’t received it.
How to get help: Contact your account manager.
Issue: I need instructions to install my license key.
How to get help: Read the documentation for your version of Puppet Enterprise.
Issue: I need help or answers to questions about installing my key.
How to get help: Open a support ticket.
Issue: I need to split my license into multiple smaller licenses.
How to get help: Contact your account manager.
Issue: I need to buy more licenses.
How to get help: Get in contact with Sales.
After five great years of helping engineers automate continuous integration workflows, first as Distelli and then as Puppet, we have decided to discontinue Puppet Pipelines. Moving forward, we will focus our automation energy on tooling that tackles the emerging set of cloud-native continuous delivery challenges. We hope you join us on the next leg of the cloud delivery automation journey. In the meantime, here are a few important things to note.
What does this mean for my Puppet Pipelines SaaS account?
Puppet will be decommissioning the Puppet Pipelines product on 31 January 2020. This change affects Puppet Pipelines, which includes Pipelines for Applications and Pipelines for Containers. The SaaS service will no longer be available after 31 January 2020.
Paying customers' billing will be canceled on or before the 31 January 2020 EOL date. Customers who are billed annually and whose next renewal date is beyond 31 January 2020 will receive a prorated refund on or before the EOL date.
If you need documentation after that date, we've archived Pipelines documentation and articles.
Are there other tools I can use to replace Pipelines?
A number of tools have gained popularity in the continuous integration (CI) and continuous deployment (CD) space. Recently, the Linux Foundation announced the formation of the Continuous Delivery Foundation to provide a space for the industry to standardize and collaborate on CI and CD projects. The projects sponsored by the Continuous Delivery Foundation, including Jenkins, JenkinsX, and Spinnaker, are good candidates to evaluate when migrating your CI/CD workloads from Pipelines.
For users who would prefer a hosted solution, we recommend evaluating Travis CI, CloudBees, and Octopus Deploy. And finally, if you're a cloud user, we recommend you evaluate your cloud providers' CI service.
How can I learn more about Puppet's cloud-native projects and continuous delivery offering?
If you or your customers are interested in early access to our cloud-native continuous delivery offering, please check out Project Nebula (public beta).
Task execution: Download Bolt.
Bolt is an open source, agentless multi-platform automation tool that reduces your time to automation. Bolt's task running and orchestration capabilities extend to all manner of targets and transports, making it easy to perform various tasks, from simple things like starting and stopping services to setting up Docker or Kubernetes.
Who can I contact with questions?
Please contact us at [email protected] with questions.
Thank you for your patronage and best regards.
When I installed (or upgraded) Puppet Enterprise 2016.1.2, file sync crashed, and I received a LargeObjectException in puppetserver.log. My code is not deploying. When I log into the console, in the upper right corner, Puppet Services status shows that Code Manager has an error condition. How do I resolve this?
Logs
From /var/log/puppetlabs/puppetserver/puppetserver.log:
org.eclipse.jgit.errors.LargeObjectException: 30fba8f56386d7d2015fc6401a084466187ab520 exceeds size limit
at org.eclipse.jgit.internal.storage.file.UnpackedObject$LargeObject.getCachedBytes(UnpackedObject.java:392) ~[puppet-server-release.jar:na]
at org.eclipse.jgit.treewalk.CanonicalTreeParser.reset(CanonicalTreeParser.java:202) ~[puppet-server-release.jar:na]
at org.eclipse.jgit.treewalk.CanonicalTreeParser.createSubtreeIterator0(CanonicalTreeParser.java:236) ~[puppet-server-release.jar:na]
at org.eclipse.jgit.treewalk.CanonicalTreeParser.createSubtreeIterator(CanonicalTreeParser.java:214) ~[puppet-server-release.jar:na]
at org.eclipse.jgit.treewalk.CanonicalTreeParser.createSubtreeIterator(CanonicalTreeParser.java:60) ~[puppet-server-release.jar:na]
at org.eclipse.jgit.treewalk.TreeWalk.enterSubtree(TreeWalk.java:924) ~[puppet-server-release.jar:na]
In PE 2016.1.2, file sync might crash if you've deployed either of the following:
100 - 300 modules
125 - 250 environments
Note: If your modules and environments have long names, a crash might occur at 100 modules or 125 environments deployed. If your modules and environments have short names, a crash might occur at 300 modules or 250 environments deployed.
The stream-file-threshold, which sets the size of streaming files allowed by JGit (a Java Git implementation used by PE), is too low for code deployment to be successful.
Version and installation information
PE version: 2016.1.2
OS: Any
Installation type: Any
Solution
Resolve the issue by setting the stream-file-threshold to 512, thus increasing the size of streaming files allowed by JGit.
Log into the master (or master of masters) as root.
Edit /etc/puppetlabs/puppetserver/conf.d/file-sync.conf to add the following to the client section: stream-file-threshold : 512
For example:
file-sync: {
data-dir: "/opt/puppetlabs/server/data/puppetserver/filesync"
client: {
poll-interval: 5
server-api-url: "https://puppet-master-ankeny:8140/file-sync/v1"
server-repo-url: "https://puppet-master-ankeny:8140/file-sync-git"
ssl-cert: "/etc/puppetlabs/puppet/ssl/certs/puppet-master-ankeny.pem"
ssl-key: "/etc/puppetlabs/puppet/ssl/private_keys/puppet-master-ankeny.pem"
ssl-ca-cert: "/etc/puppetlabs/puppet/ssl/certs/ca.pem"
enable-forceful-sync : true
stream-file-threshold : 512
  }
}
Restart pe-puppetserver by running:
puppet resource service pe-puppetserver ensure=stopped
puppet resource service pe-puppetserver ensure=running
Repeat steps 1 through 3 on any compile masters that you have in your deployment.
Find your operating system, version, and architecture and download your Puppet Enterprise 2019.3 master.
Ready to deploy? Get detailed instructions to install PE.
Ready to upgrade? Get detailed instructions to upgrade PE.
Need an older version of PE? View available downloads for previous versions.
Need to download the latest agent? Get it here.
Need to download the latest client tools package? Get it here.
To download the master:
1. SSH into the node where you want to install the master.
2. Use wget or cURL to download Puppet Enterprise, as appropriate to your environment.
On the command line, run
wget --content-disposition "<DOWNLOAD URL>"
or
curl -JLO "<DOWNLOAD URL>"
Make sure to use quotation marks, for example:
wget --content-disposition "https://pm.puppet.com/cgi-bin/download.cgi?dist=el&rel=8&arch=x86_64&ver=latest"
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 8 (x86_64): https://pm.puppet.com/cgi-bin/download.cgi?dist=el&rel=8&arch=x86_64&ver=latest (GPG signature available)
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 7 (x86_64): https://pm.puppet.com/cgi-bin/download.cgi?dist=el&rel=7&arch=x86_64&ver=latest (GPG signature available)
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 6 (x86_64): https://pm.puppet.com/cgi-bin/download.cgi?dist=el&rel=6&arch=x86_64&ver=latest (GPG signature available)
RHEL 7 with Federal Information Processing Standards (FIPS) (x86_64): https://pm.puppet.com/cgi-bin/download.cgi?dist=redhatfips&rel=7&arch=x86_64&ver=latest (GPG signature available)
Ubuntu ver. 18.04 (amd64): https://pm.puppet.com/cgi-bin/download.cgi?dist=ubuntu&rel=18.04&arch=amd64&ver=latest (GPG signature available)
Ubuntu ver. 16.04 (amd64): https://pm.puppet.com/cgi-bin/download.cgi?dist=ubuntu&rel=16.04&arch=amd64&ver=latest (GPG signature available)
SLES ver. 12 (x86_64): https://pm.puppet.com/cgi-bin/download.cgi?dist=sles&rel=12&arch=x86_64&ver=latest (GPG signature available)
I want to upgrade Puppet Enterprise, but I'm not sure which version to upgrade to and how to upgrade.
Solution:
A successful upgrade requires steps beyond running the upgrader. Please read the appropriate version of our upgrade documentation for critical information before you start.
We recommend that you upgrade to the latest version of Puppet Enterprise if you always want to take advantage of the latest features and capabilities as soon as they become available. You can download the latest version of PE from our knowledge base without filling out any forms.
Puppet Enterprise 2018.1 is our long-term support (LTS) release, meaning you can expect full support, security updates, and bug fixes through May 2020. This version is right for you if you want continued security updates and full support without upgrading your implementation frequently. You can download PE 2018.1 from our Previous releases page.
Find your operating system, version, and architecture of choice to download your Puppet Enterprise 2019.3 agent.
Ready to deploy? Get detailed instructions to install PE.
Ready to upgrade? Get detailed instructions to upgrade to the latest version.
Need an older version of PE? View available downloads for previous versions.
Need to download the latest master version? Get it here.
Need to download the latest client tools package? Get it here.
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 8 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/8/puppet6/x86_64/puppet-agent-6.12.0-1.el8.x86_64.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 7 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/7/puppet6/x86_64/puppet-agent-6.12.0-1.el7.x86_64.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 7 (ppc64le): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/7/puppet6/ppc64le/puppet-agent-6.12.0-1.el7.ppc64le.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 7 (aarch64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/7/puppet6/aarch64/puppet-agent-6.12.0-1.el7.aarch64.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 6 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/6/puppet6/x86_64/puppet-agent-6.12.0-1.el6.x86_64.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 6 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/6/puppet6/i386/puppet-agent-6.12.0-1.el6.i386.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 5 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/5/puppet6/x86_64/puppet-agent-6.12.0-1.el5.x86_64.rpm
EL (RHEL, CentOS, Scientific Linux, Oracle Linux) ver. 5 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/5/puppet6/i386/puppet-agent-6.12.0-1.el5.i386.rpm
RHEL 7 with Federal Information Processing Standards (FIPS) (x86_64): Contact sales
Fedora ver. 30 (x86_64): http://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/fedora/30/puppet6/x86_64/puppet-agent-6.12.0-1.fc30.x86_64.rpm
Fedora ver. 29 (x86_64): http://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/fedora/29/puppet6/x86_64/puppet-agent-6.12.0-1.fc29.x86_64.rpm
Fedora ver. 28 (x86_64): http://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/fedora/28/puppet6/x86_64/puppet-agent-6.12.0-1.fc28.x86_64.rpm
Microsoft Windows ver. Windows Server 2008(R2), Windows Server 2012(R2), Windows Server 2016, Windows Server 2019, Windows 7/8.1/10 (64-bit): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/windows/puppet-agent-6.12.0-x64.msi
Microsoft Windows ver. Windows Server 2008(R2), Windows Server 2012(R2), Windows Server 2016, Windows Server 2019, Windows 7/8.1/10 (32-bit): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/windows/puppet-agent-6.12.0-x86.msi
Ubuntu ver. 18.04 (amd64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/bionic/puppet6/puppet-agent_6.12.0-1bionic_amd64.deb
Ubuntu ver. 16.04 (amd64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/xenial/puppet6/puppet-agent_6.12.0-1xenial_amd64.deb
Ubuntu ver. 16.04 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/xenial/puppet6/puppet-agent_6.12.0-1xenial_i386.deb
Ubuntu ver. 16.04 (ppc64el): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/xenial/puppet6/puppet-agent_6.12.0-1xenial_ppc64el.deb
Debian ver. 10 (amd64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/buster/puppet6/puppet-agent_6.12.0-1buster_amd64.deb
Debian ver. 9 (amd64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/stretch/puppet6/puppet-agent_6.12.0-1stretch_amd64.deb
Debian ver. 9 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/stretch/puppet6/puppet-agent_6.12.0-1stretch_i386.deb
Debian ver. 8 (amd64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/jessie/puppet6/puppet-agent_6.12.0-1jessie_amd64.deb
Debian ver. 8 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/deb/jessie/puppet6/puppet-agent_6.12.0-1jessie_i386.deb
SLES ver. 15 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/sles/15/puppet6/x86_64/puppet-agent-6.12.0-1.sles15.x86_64.rpm
SLES ver. 12 (ppc64le): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/sles/12/puppet6/ppc64le/puppet-agent-6.12.0-1.sles12.ppc64le.rpm
SLES ver. 12 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/sles/12/puppet6/x86_64/puppet-agent-6.12.0-1.sles12.x86_64.rpm
SLES ver. 11 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/sles/11/puppet6/x86_64/puppet-agent-6.12.0-1.sles11.x86_64.rpm
SLES ver. 11 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/sles/11/puppet6/i386/puppet-agent-6.12.0-1.sles11.i386.rpm
Solaris ver. 11 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/solaris/11/puppet6/puppet-agent@6.12.0,5.11-1.i386.p5p
Solaris ver. 11 (SPARC): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/solaris/11/puppet6/puppet-agent@6.12.0,5.11-1.sparc.p5p
Solaris ver. 10 (i386): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/solaris/10/puppet6/puppet-agent-6.12.0-1.i386.pkg.gz
Solaris ver. 10 (SPARC): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/solaris/10/puppet6/puppet-agent-6.12.0-1.sparc.pkg.gz
AIX ver. 7.2, 7.1, and 6.1 (Power): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/aix/6.1/puppet6/ppc/puppet-agent-6.12.0-1.aix6.1.ppc.rpm
MacOS ver. 10.14 (Mojave) (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/apple/10.14/puppet6/x86_64/puppet-agent-6.12.0-1.osx10.14.dmg
MacOS ver. 10.13 (High Sierra) (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/apple/10.13/puppet6/x86_64/puppet-agent-6.12.0-1.osx10.13.dmg
MacOS ver. 10.12 (Sierra) (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/apple/10.12/puppet6/x86_64/puppet-agent-6.12.0-1.osx10.12.dmg
Amazon Linux ver. 1 (x86_64): https://pm.puppet.com/puppet-agent/2019.3.0/6.12.0/repos/el/6/puppet6/x86_64/puppet-agent-6.12.0-1.el6.x86_64.rpm
If you don’t have root command line access in Puppet Enterprise, you can run operating system commands on the master from the console. You can use a task in the support_tasks module to troubleshoot issues by checking the status of PE service ports, tailing the last 100 lines of PE service logs, and checking the permissions of your SSL directory.
Version and installation information
PE version: 2017.3 and later
OS: Any master OS
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Use a task in the support_tasks module to run the following commands on the master from the console:
puppet_port_status - netstat -ln | grep '8140\|5432\|8170\|8143\|443 \|4433\|8081\|8150\|8151\|8142' - Checks the status of all of the listed Puppet Enterprise service ports.
puppetserver_log / puppetdb_log / console_log / orchestrator_log / syslog_log - Tail the last 100 lines of one PE service’s log file.
ssldir_permissions - Show permissions for all folders below ssldir.
To use the task:
Download and install the support_tasks module, which includes the task for this solution.
In the console, in the Run section, click Task.
In the Task field, select task st0372.
Target your master node. From the list of target types, select Node list. Expand the Inventory nodes target. Enter the name of the master and click Search. From the list of results, select the master node.
Under Task parameters, select the parameter that matches the command you’d like to run from the dropdown list. If you’re tailing a log, select the log file you’d like to tail as the value.
Click Run job. After the task is completed, the command’s output will appear on the Job page.
If you're logged in to the Support Portal, you can use these links to download PE 2019.3:
The latest PE master
The latest PE agent
Learn more about 2019.3.
You can download 2019.1.4 and 2018.1.12 (our long term support release) from our Previous releases page. Learn more about 2019.1.4 and 2018.1.12.
Puppet Enterprise uses a signed certificate to authenticate against the certificate authority (CA) built into Puppet Server. When the expiry date for the CA certificate has passed, your agents won’t be able to check in. You can use the Bolt plans and tasks in the puppetlabs-ca_extend module to:
Generate a CA certificate with a new expiry date.
Distribute the CA cert to your agents.
Check the expiry date of the CA cert and agent certificates.
Error messages and logs
During an agent run, if the CA certificate is expired, you get an error similar to the following:
Info: Not using expired certificate for ca from cache; expired at <DATE>
Error: Could not run: stack level too deep
Version and installation information
PE version: 2018.1 and later
PE OS: Any
PE installation type:
2019.2: All infrastructure on the master, no compilers.
2018.1 to 2019.1: Monolithic (instructions not tested on a split installation)
Bolt version: 1.8.0 and later
Bolt OS: A *nix OS (to run Bolt plans)
Bolt installation: On a client machine or the master
Solution
Install the puppetlabs-ca_extend module and its dependencies using Bolt. Use the Bolt plans and tasks to:
Generate a CA certificate with a new expiry date.
Distribute the CA cert to your agents.
Check the expiry date of the CA cert and agent certificates.
Note: puppetlabs-ca_extend replaces the following puppetlabs-support_tasks module plans and tasks: kb0337a, kb0337b, kb0337f, and kb0337g.
Support might ask you to increase the log level from info (the default) to debug to gather additional information when troubleshooting a problem with PE services. For example, increasing the log level for pe-puppetserver is helpful for troubleshooting Code Manager and file sync issues.
Version and installation information
PE version: 2016.x to 2019.x (manual version), 2017.3.x and later (tasks versions)
OS: Any
Installation type: Any
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Change the log level by editing the root level setting in the service's logback.xml and restarting the service. You can do this manually, or you can install a module that uses a task to complete the steps.
Note: Debug-level logs fill up disk space quickly. When you finish gathering information, change the log level back to info. Before you send debug level logs to us, review them for sensitive information, such as hostnames and IP addresses.
To complete the steps manually:
The following command uses puppet apply with an augeas resource to edit the root level setting in the service’s logback.xml and restart the service.
Set the value for FACTER_level to debug to increase the log level, or to info to return it to the default. Set the value for FACTER_service to the affected service: puppetserver, puppetdb, console-services, or orchestration-services.
FACTER_level="<DEBUG OR INFO>" FACTER_service="<SERVICE NAME>" puppet apply -e "augeas {'toggle logging level': incl => \"/etc/puppetlabs/$::service/logback.xml\", lens => 'Xml.lns', context => \"/files/etc/puppetlabs/$::service/logback.xml/configuration/root/#attribute\", changes => \"set level \'$::level\'\"}~> service {\"pe-$::service\": ensure => running }"
Examples
Set pe-puppetserver to debug-level logging:
# FACTER_level="debug" FACTER_service="puppetserver" puppet apply -e "augeas {'toggle logging level': incl => \"/etc/puppetlabs/$::service/logback.xml\", lens => 'Xml.lns', context => \"/files/etc/puppetlabs/$::service/logback.xml/configuration/root/#attribute\", changes => \"set level \'$::level\'\"}~> service {\"pe-$::service\": ensure => running }"
Notice: Compiled catalog for pe-201642-master.puppetdebug.vlan in environment production in 0.31 seconds
Notice: /Stage[main]/Main/Augeas[toggle logging level]/returns: executed successfully
Notice: /Stage[main]/Main/Service[pe-puppetserver]: Triggered 'refresh' from 1 events
Notice: Applied catalog in 35.43 seconds
# tail -3 /var/log/puppetlabs/puppetserver/puppetserver.log
2016-12-15 16:11:23,076 INFO [async-dispatch-2] [p.e.s.m.master-service] Puppet Server has successfully started and is now ready to handle requests
2016-12-15 16:11:23,077 DEBUG [main] [p.t.internal] Registering SIGHUP handler for restarting TK apps
2016-12-15 16:11:23,079 DEBUG [async-dispatch-2] [p.t.internal] Lifecycle worker completed :boot lifecycle task; awaiting next task.
Set puppetserver to info-level logging:
# FACTER_level="info" FACTER_service="puppetserver" puppet apply -e "augeas {'toggle logging level': incl => \"/etc/puppetlabs/$::service/logback.xml\", lens => 'Xml.lns', context => \"/files/etc/puppetlabs/$::service/logback.xml/configuration/root/#attribute\", changes => \"set level \'$::level\'\"}~> service {\"pe-$::service\": ensure => running }"
Notice: Compiled catalog for pe-201642-master.puppetdebug.vlan in environment production in 0.32 seconds
Notice: /Stage[main]/Main/Augeas[toggle logging level]/returns: executed successfully
Notice: /Stage[main]/Main/Service[pe-puppetserver]: Triggered 'refresh' from 1 events
Notice: Applied catalog in 41.88 seconds
# tail -3 /var/log/puppetlabs/puppetserver/puppetserver.log
2016-12-15 16:13:56,983 INFO [async-dispatch-2] [p.e.s.a.analytics-service] Puppet Server Analytics has successfully started and will run in the background
2016-12-15 16:13:56,986 INFO [async-dispatch-2] [p.s.l.legacy-routes-service] The legacy routing service has successfully started and is now ready to handle requests
2016-12-15 16:13:56,988 INFO [async-dispatch-2] [p.e.s.m.master-service] Puppet Server has successfully started and is now ready to handle requests
To use the task:
Note: You can only complete the task-based steps in PE 2017.3.x and later.
You can run the task from the command line either by using the puppet task command or by using Bolt. If you’re using Bolt with the default SSH transport (and not the PCP protocol), you will avoid getting an error when Puppet services restart. However, either method will set the log level and restart the service.
To run the task that changes the PE service log levels, you must download and install the puppetlabs-support_tasks module, which includes the task for this solution.
Run the task on the command line:
On the master, run the task against the certname of the target infrastructure node. Change the value for loglevel to debug to increase the log level, or to info to return it to the default. Change the value for service to the affected service: puppetserver, puppetdb, console-services, or orchestration-services.
puppet task run support_tasks::st0009_change_pe_service_loglevel loglevel="<DEBUG OR INFO>" service="<SERVICE NAME>" -n $(puppet config print certname)
Note: The task restarts the target Puppet service. For console-services and orchestration-services, this restart will cause a connection error. You can safely ignore the error while the task continues to run in the background.
Example
To set the debug level for the puppetserver service, on the master run the following:
puppet task run support_tasks::st0009_change_pe_service_loglevel loglevel="debug" service="puppetserver" -n $(puppet config print certname)
Run the task using Bolt and SSH
To avoid errors when services are restarted during the task, use Bolt with the default SSH transport (and not the PCP protocol).
On the master, run the task against the certname of the target infrastructure node. Change the value for loglevel to debug to increase the log level, or to info to return it to the default. Change the value for service to the affected service: puppetserver, puppetdb, console-services, or orchestration-services.
bolt task run support_tasks::st0009_change_pe_service_loglevel loglevel="<DEBUG OR INFO>" service="<SERVICE NAME>" -n $(puppet config print certname) --modulepath="/etc/puppetlabs/code/environments/production/modules"
Example
To set the debug level for the console-services service, on the master node run the following:
bolt task run support_tasks::st0009_change_pe_service_loglevel loglevel="debug" service="console-services" -n <CONSOLE CERTNAME> --modulepath="/etc/puppetlabs/code/environments/production/modules"
When you work with Support, we might ask you to gather troubleshooting information using the command puppet enterprise support. The command runs a script that collects a large amount of system information, compresses it into a tarball, and tells you the location of the tarball when it finishes running.
Note:
Our terminology changed when we released PE 2019.1. A master of masters is now called a master, and a compile master is now called a compiler.
If you need to obfuscate the hostnames and IP addresses collected by the support script, read Obfuscate hostnames and IP addresses in Support Script output using SOScleaner.
Links to our documentation are to the latest version of the PE documentation; please navigate to the correct version for your deployment.
If you are using Puppet Enterprise 2018.1.11, 2019.1.3, or 2019.2.0 and later, you can choose the diagnostics run by the support script.
PE version: 2016.2 to 2019.2
View instructions for earlier versions of Puppet Enterprise.
Run the support script on your master, PuppetDB, or console node:
As root on your master, PuppetDB, or console node, run:
puppet enterprise support
This will generate the tarball and tell you its location in an output similar to the following:
Support data is located at /var/tmp/puppet_enterprise_support_pe-master_20190704123456.tar.gz
Ask your Support Engineer for help sending the information. We use Box for large uploads. If you can't use Box, please let us know; we also have SFTP servers.
Run the support script on a compiler:
For PE 2016.5 to 2019.1
As root, run:
puppet enterprise support
For PE 2016.2 to 2016.4
As root, run:
/opt/puppetlabs/server/data/enterprise/modules/pe_support_script/files/puppet-enterprise-support
Sending the information to Support
Ask your Support Engineer for help sending the information. We use Box for large uploads. If you can't use Box, please let us know; we also have SFTP servers.
Run the support script on an agent node:
The support script is designed to collect information from infrastructure nodes. However, it can also collect information from agent nodes, including Puppet and system logs and Puppet settings.
On Linux
As root, run:
puppet enterprise support
On Windows nodes for PE 2018.1.4 (and later 2018.1.x versions) or PE 2019.0 and newer
As a user that is a member of a local Administrators group, run:
puppet enterprise support
Sending the information to Support
Ask your Support Engineer for help sending the information. We use Box for large uploads. If you can't use Box, please let us know; we also have SFTP servers.
Run the support script on an AMQ broker:
For PE 2016.5 to 2018.1
As root, run:
puppet enterprise support
Sending the information to Support
Ask your Support Engineer for help sending the information. We use Box for large uploads. If you can't use Box, please let us know; we also have SFTP servers.
For PE 2016.2 to 2016.4
As root, run:
/opt/puppetlabs/server/data/enterprise/modules/pe_support_script/files/puppet-enterprise-support
Sending the information to Support
Ask your Support Engineer for help sending the information. We use Box for large uploads. If you can't use Box, please let us know; we also have SFTP servers.
Related Links
Learn more about the information the support script collects, and the path to the script's code in our documentation.
I can't log in to the console and am receiving an incorrect username/password error.
Error messages and logs
Error message
The username/password combination entered is incorrect. If you believe you have received this message in error, please consult the logs at /var/log/pe-console-services/console-services.log.
Log
2015-07-07 16:02:33,845 WARN [p.r.utils] Authentication failed.
Version and installation information
PE version: PE 3.8.x, 3.7.x
Instructions for more recent versions are in our documentation.
Solution:
Reset the admin password by running the update-superuser-password.rb utility script. You will then be able to log into the console.
Note: The script must be run from the command line of the console node. In a monolithic installation, the console is on the Puppet master node. In a split installation, the console is on a separate node from the Puppet master.
You will need to use the PE version of Ruby in /opt/puppet/bin/ruby.
Log into the console node as root.
The script file, update-superuser-password.rb, is not installed by default. It is contained in the PE 3.8.0 tarball. To get the file, go to the Previous Release: PE 3.8.0 page and copy the link for your OS.
Run wget <PE TARBALL LINK> on the console node. Replace <PE TARBALL LINK> with your copied link.
Note: you will only be using the part of the link up to .gz, for example wget http://pm.puppetlabs.com/puppet-enterprise/3.8.0/puppet-enterprise-3.8.0-el-6-x86_64.tar.gz. This will download the PE 3.8.0 installer tarball.
Find the file path of the script in the tarball by running tar -tzf <TARBALL FILE NAME> | grep superuser, replacing <TARBALL FILE NAME> with your tarball file name.
Extract the script update-superuser-password.rb from the tarball by running tar -xf <TARBALL FILE NAME> <FILE PATH OF SCRIPT>, replacing <TARBALL FILE NAME> with your tarball file name and <FILE PATH OF SCRIPT> with the file path from step 4.
Navigate to the folder where the tarball extracted. Copy the script to /opt/puppet/bin by running cp update-superuser-password.rb /opt/puppet/bin/update-superuser-password.rb.
On the console node, run cd /opt/puppet/bin to navigate to the directory containing the script.
To reset your password run the following on the console node replacing <NEWPASSWORD> with your new console password:
q_puppet_enterpriseconsole_auth_password=<NEWPASSWORD> \
q_puppetagent_certname=$(puppet config print certname) /opt/puppet/bin/ruby \
update-superuser-password.rb
Note: When the script runs successfully there is no output.
Log into the console using the admin account and your new password.
To learn more about managing users, see Creating and Managing Users and User Roles for PE 3.8.x and 3.7.x.
Puppet Enterprise uses x509 certificates to identify infrastructure nodes and provide secure communications using the TLS protocol. By default, these certificates are generated with a five-year lifespan, after which they expire and become unusable. Detect and update old certificates using the puppetlabs/certregen module.
For help with expired certificates in PE 2018.1 and later, please see Check and fix the expiry date for your CA certificate.
Error messages
When you run puppet agent -t on a node with an expired certificate, you see messages similar to the following:
Info: Not using expired certificate for <name> from cache; expired at <date> UTC
Version and installation information
PE version: 3.x to 2017.3.x
Solution
Prerequisite:
On your certificate authority (CA) node:
Install the puppetlabs/certregen module.
Install the chloride gem: /opt/puppetlabs/puppet/bin/gem install chloride
Detect old certificates
List certificates that are close to expiration by running the following command as root on the Puppet CA node:
puppet certregen healthcheck
Check the command’s output for a "ca" (certificate authority) entry in the list of expiring certificates. An expiring or expired CA certificate should be addressed before moving on to any other entry in the healthcheck list.
For example:
# puppet certregen healthcheck
"ca" (SHA256) 92:C3:B8:B2:49:52:D5:29:25:7A:2A:99:35:DB:68:CD:65:E4:37:58:65:79:7C:23:A9:4F:DF:A5:8A:16:FE:C1
Status: expired
Expiration date: 2016-12-13 02:08:22 UTC
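As an additional check, you can read the expiry date directly from the cached CA certificate with openssl (assuming the default ssldir; adjust the path for your installation). The sample output matches the expiry date shown above:
# openssl x509 -noout -enddate -in /etc/puppetlabs/puppet/ssl/certs/ca.pem
notAfter=Dec 13 02:08:22 2016 GMT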
Create a new CA certificate
If the CA certificate has expired or is close to expiration, use the puppetlabs/certregen module to
Create a new CA certificate with an extended lifespan
Distribute the cert to Puppet agents using SSH
View the instructions here: https://forge.puppet.com/puppetlabs/certregen#usage
Regenerate other certificates
Regenerate certificates for other nodes by following the PE certificate regeneration workflows.
Note: Select the PE version that matches your infrastructure. Our documentation moved in 2017.3, so we've included several links to help you navigate to an appropriate version for you.
The PE 2017.3 certificate and SSL page links to the certificate regeneration instructions for master, PuppetDB, console, compile master, and agent nodes.
Master node: PE 2017.2
Database node: PE 2017.2
Console node: PE 2017.2
Compile Master nodes: PE 2017.2
All other nodes: PE 2017.2
You’ve identified a group of nodes where you’d like to enable or disable the Puppet daemon.
Note: You can identify nodes where the daemon is enabled or disabled using the steps in Identifying nodes where the Puppet daemon is enabled or disabled.
Version and installation information
PE version: 2018.x
Solution
You can enable or disable groups of agents by installing a module and running tasks.
Get the module and tasks
Download and install the puppetlabs-support_tasks module, which includes the tasks for this solution.
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
To enable agents:
By default, this enables all agents; however, you can apply it to a subset of nodes by editing the list of nodes created in step 1.
Create a list of disabled nodes in a file called nodefile.txt by running a task. On the master:
puppet task run support_tasks::st0285_find_disabled_agents --no-color --query 'nodes[certname] { }' | grep Finished | awk '{printf "%s%s",sep,$NF; sep=",\n"}' > nodefile.txt
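The resulting nodefile.txt contains one certname per line, with every line except the last ending in a comma, for example (hypothetical certnames):
agent1.example.com,
agent2.example.com,
agent3.example.com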
Learn more about this task in Identifying nodes where the Puppet daemon is enabled or disabled.
Enable all of the agents in the list by running a task with the list of nodes as input. On the master, run:
puppet task run support_tasks::st0286_change_puppet_daemon_runmode puppet_mode=enable --nodes @nodefile.txt
Note: To enable nodes, the required parameter puppet_mode is set to enable.
To disable an agent:
Disable an agent by running a task. For example, to disable an agent that you’re testing, run the following on the master:
puppet task run support_tasks::st0286_change_puppet_daemon_runmode puppet_mode=disable reason='down for testing' --nodes agent1.example.com
Note: To disable nodes, the required parameter puppet_mode is set to disable. The reason parameter is optional.
When you’re having performance issues such as slowdowns due to inefficient code or slow external services, you can get troubleshooting information by enabling profiling on Puppet Server. Use the Puppet Profile Parser to make the information easier to read and understand.
Version and installation information
Puppet Enterprise version: 2018.1.x to 2019.2.x
Note: We cannot troubleshoot third-party software (FlameGraph and Jaeger).
Solution
When profiling is enabled, a large amount of information is written to Puppet Server’s main log file (/var/log/puppetlabs/puppetserver/puppetserver.log). Because the information is mixed with other information, it’s hard to interpret. You can use the Puppet Profile Parser to extract relevant information, structure it, and send it to graphing tools like FlameGraph and Jaeger.
To use Puppet Profile Parser, use the steps below to:
Enable profiling on some or all nodes.
Install the Puppet Profile Parser.
Install the graphing tool or tools that you want to use.
Use the Profile Parser to extract and format profiling information and use it with your graphing tools.
Enable profiling on catalog requests
Profiling is disabled by default in Puppet Server. You can enable it on specific nodes by using agent configuration or on all nodes by using Puppet Server configuration.
Enable profiling on specific nodes
To enable profiling on a few specific agent nodes:
On each node, in the agent section of puppet.conf set profile = true:
[agent]
profile = true
To enable management of this setting, and apply it to a larger number of nodes, use an ini_setting resource from puppetlabs/inifile.
ini_setting { 'puppet agent: enable profiling':
ensure => present,
path => '/etc/puppetlabs/puppet/puppet.conf',
section => 'agent',
setting => 'profile',
value => 'true',
notify => Service['puppet'],
}
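A minimal way to apply this resource, assuming the puppetlabs-inifile module is available and using a hypothetical manifest name (in a Code Manager workflow, add the module to your Puppetfile instead):
puppet module install puppetlabs-inifile
puppet apply enable_profiling.pp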
Enable profiling on all requests
Profile all catalog requests on all nodes.
On the master, in puppet.conf in the master section, set profile = true.
[master]
profile = true
On the master, restart the Puppet Server service pe-puppetserver.
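For example, using puppet resource:
puppet resource service pe-puppetserver ensure=stopped
puppet resource service pe-puppetserver ensure=running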
Install the Puppet Profile Parser
To run the Puppet Profile Parser, which is written in Ruby, you need a Ruby interpreter of version 2.0 or later. If it is not present, install the system Ruby interpreter. For example:
[root@master ~]# yum install -y ruby
Install the Puppet Profile Parser by downloading a release from its Releases page on Github, and making it executable. For example:
[root@master ~]# wget -O /usr/local/bin/puppet-profile-parser.rb \
https://github.com/Sharpie/puppet-profile-parser/releases/download/0.3.0/puppet-profile-parser.rb
(...)
[root@master ~]# chmod +x /usr/local/bin/puppet-profile-parser.rb
Install graphing tools
Install the graphing tool or tools you want to use. For a quick and simple installation, you can use FlameGraph, which is written in Perl, to visualize output from the Puppet Profile Parser. Jaeger is a collection of services and tools for performing distributed performance tracing.
Install FlameGraph
Many people use FlameGraph from its master branch on Github, but there is also a v1.0 release that you can install. Download it from the Releases page and extract the archive somewhere appropriate on your Puppet master, such as in /usr/local:
[root@master ~]# wget https://github.com/brendangregg/FlameGraph/archive/v1.0.tar.gz
[root@master ~]# tar zxf v1.0.tar.gz -C /usr/local/
Install Jaeger
There are several ways of deploying Jaeger, including a Kubernetes application.
You can find detailed instructions on deploying in the Jaeger documentation. If you have a Docker host, you can quickly deploy an all-in-one Jaeger stack, suitable for evaluating Jaeger, using a Docker image:
[croddy@docker ~]$ docker run -d -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
-p 16686:16686 -p 9411:9411 jaegertracing/all-in-one:latest
(...)
[croddy@docker ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
472b3ecc7add jaegertracing/all-in-one:latest "/go/bin/all-in-one-" About a minute ago Up About a minute 5775/udp, 5778/tcp, 14250/tcp, 0.0.0.0:9411->9411/tcp, 6831-6832/udp, 14268/tcp, 0.0.0.0:16686->16686/tcp ecstatic_galois
Use the Profile Parser to format and graph profiling data
Format and visualize the profiling data in your graphing tools.
Graph profiling data with FlameGraph
The Profile Parser has built-in support to format profiling information for FlameGraph. Parse the Puppet Server logs into FlameGraph format by using the puppet-profile-parser.rb command:
[root@master ~]# /usr/local/bin/puppet-profile-parser.rb -f flamegraph \
/var/log/puppetlabs/puppetserver/puppetserver.log \
| /usr/local/FlameGraph-1.0/flamegraph.pl --countname ms \
> /var/tmp/puppet_profile.svg
The graph is a Scalable Vector Graphics (SVG) file; you can click on various rows to zoom in and see details. FlameGraph graphs can reveal a lot about Puppet Server performance quickly.
In this example, FlameGraph wrote its output through shell redirection to /var/tmp/puppet_profile.svg. You can write the graph to whatever path you like, and serve it from a web server or copy it locally for viewing. In the resulting graph (not shown here), it is clear that the catalog for master.example.com takes longer than those of the other nodes visible to the left.
In another example (also not shown), zooming in to the details for one node reveals that Puppet Server spent a lot of time on nested each functions.
This example command parses the local logs on your master. If you have compilers, either run the Puppet Profile Parser and FlameGraph on one or more compilers or collect the logs centrally for processing.
Send profiling data to Jaeger
For ongoing detailed monitoring of Puppet Server performance, use the Profile Parser to post data to Jaeger. Use a cron job or similar approach to run the Profile Parser periodically and post its formatted results to Jaeger.
Run the puppet-profile-parser.rb command below to parse the current logs and post them only once:
[root@master ~]# /usr/local/bin/puppet-profile-parser.rb -f zipkin \
/var/log/puppetlabs/puppetserver/puppetserver.log \
| curl -X POST -H 'Content-Type: application/json' \
http://jaeger.example.com:9411/api/v2/spans --data @-
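As a sketch, a root crontab entry that re-parses the log and posts to a hypothetical Jaeger host once an hour might look like this (paths and hostname are assumptions to adapt):
0 * * * * /usr/local/bin/puppet-profile-parser.rb -f zipkin /var/log/puppetlabs/puppetserver/puppetserver.log | curl -s -X POST -H 'Content-Type: application/json' http://jaeger.example.com:9411/api/v2/spans --data @-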
To explore the detail of traces and search for traces matching specific conditions, browse the Jaeger web service; for example, you can find all catalog requests that took longer than a certain amount of time.
This example command parses the local logs on your master. If you have compilers, run the Puppet Profile Parser on one or more compilers and post the results to Jaeger from the compilers.
For extensive details on deploying and using Jaeger, refer to the Jaeger documentation.
I made configuration changes to my PuppetDB node. When I run puppet agent -t, I get a replace_facts error when Puppet attempts to retrieve a catalog from the master. How do I resolve it?
Error messages and logs
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed to execute '/pdb/cmd/v1?checksum=8bdbc1ce1e230a844baf32eacd03afa67bc2a5f8&version=4
&certname=thisnode.example.com&command=replace_facts' on at least 1 of the following 'server_urls': https://puppetdbnode.example.com:8081
When you restart pe-puppetdb, either directly on the command line, or indirectly through making configuration changes to PuppetDB, an init script runs to initialize it. But the init script returns control before the initialization of pe-puppetdb has completely finished. If you run puppet agent -t before the initialization is finished, you get this error.
Version and installation information
PE version: 2016.1.1
OS: All *nix
Installation type: Monolithic, split
Solution
Wait a few minutes for pe-puppetdb to completely restart, and then run puppet agent -t to apply the PuppetDB config changes you made.
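If you want to confirm that PuppetDB has finished initializing before retrying the agent run, one option (an assumption based on PuppetDB's status endpoint, which listens on port 8080 on the PuppetDB node by default) is:
curl http://localhost:8080/status/v1/services/puppetdb-status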
Due to regulatory compliance or other requirements, you might need to change the cipher suites that SSL-enabled Puppet Enterprise services use to communicate with other PE components.
Version and installation information
PE version: 2018.1.x and later
Solution
You can set SSL cipher suites for Puppet services or the console services using Hiera (preferred) or in the console. If you're using MCollective in PE 2018.1, you can also set cipher suites for ActiveMQ in the console.
Note: Settings in the console override settings in Hiera. Set the parameter in one or the other, but not both.
Warning: The examples in this article show formatting. Please replace the cipher suites in the examples with your own cipher suites.
Set SSL ciphers for Puppet services
The puppet_enterprise::ssl_cipher_suites parameter sets SSL cipher suites for Puppet Server, PuppetDB, and orchestration services.
In Hiera
On the master, set cipher suites in your common.yaml using an array. For example:
puppet_enterprise::ssl_cipher_suites:
- 'SSL_RSA_WITH_NULL_MD5'
- 'SSL_RSA_WITH_NULL_SHA'
- 'TLS_DH_anon_WITH_AES_128_CBC_SHA'
- 'TLS_DH_anon_WITH_AES_128_CBC_SHA256'
In the console
Navigate to the PE Infrastructure group. In the puppet_enterprise class, set the ssl_cipher_suites parameter.
["SSL_RSA_WITH_NULL_MD5", "SSL_RSA_WITH_NULL_SHA", "TLS_DH_anon_WITH_AES_128_CBC_SHA", "TLS_DH_anon_WITH_AES_128_CBC_SHA256"]
Set SSL ciphers for console services
The puppet_enterprise::profile::console::proxy::ssl_ciphers parameter sets cipher suites for console services affecting traffic on port 443.
In Hiera
On the master, set RFC format cipher suites in your common.yaml as a colon-separated string. For example:
puppet_enterprise::profile::console::proxy::ssl_ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA"
In the console
Navigate to the PE Console node group. In the puppet_enterprise::profile::console::proxy class, add the following to the ssl_ciphers parameter in the data section:
["ECDHE-RSA-AES128-GCM-SHA256",
"ECDHE-ECDSA-AES128-GCM-SHA256",
"ECDHE-RSA-AES256-GCM-SHA384",
"ECDHE-ECDSA-AES256-GCM-SHA384",
"DHE-RSA-AES128-GCM-SHA256",
"DHE-DSS-AES128-GCM-SHA256",
"kEDH+AESGCM",
"ECDHE-RSA-AES128-SHA256",
"ECDHE-ECDSA-AES128-SHA256",
"ECDHE-RSA-AES128-SHA",
"ECDHE-ECDSA-AES128-SHA"]
Additional Resources
Verify SSL protocols and cipher suites in use on Puppet Enterprise nodes
If you don’t have root command line access in Puppet Enterprise and/or you don’t have direct access to PE API integrations, you can make API calls from the console. You can use a task in the support_tasks module to create the Continuous Delivery for Puppet Enterprise role, list PE authentication tokens, trigger a code deploy for all environments, and get the status of all PE services.
Version and installation information
PE version: 2017.3 and later
OS: Any master OS
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Use a task in the support_tasks module to run the following API calls in the console:
create_role_cd4pe - Create the Continuous Delivery for PE role.
list_tokens - List all PE authentication tokens.
manual_gitlab_webhook_hit - Trigger a code deployment to all environments using a simulated GitLab webhook. To use this command, you must have a valid authentication token in the default location (~/.puppetlabs/token).
get_all_services_status - Output the status of all PE services.
To use this task you must have an authentication token. If you don’t already have one, you can generate one using a task in the support_tasks module.
Download and install the support_tasks module which includes the task for this solution.
In the console, in the Run section, click Task.
In the Task field, select task st0373. Target your master. From the list of target types, select Node list. Expand the Inventory nodes target. Enter your master’s name and click Search. From the list of results, select your master.
Under Task parameters, select the parameter that matches the API call you’d like to make from the dropdown list.
Click Run job. After the task is completed, the API call’s output will appear on the Job page.
If you don’t have root command line access in PE, you can run Puppet commands in the console using a task in the support_tasks module. You can use these commands to see the actual settings values that a Puppet service uses, list your modules in all environments, display errors and alerts from PE services, and output optimized settings for PE services based on recommended guidelines.
Version and installation information
PE version: All supported versions
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
By using a task from the support_tasks module, you can run the following commands in the console:
puppet config_print - See the actual settings values that a Puppet service uses
puppet module_list --all - List installed modules in all environments
puppet infrastructure_status - Display errors and alerts from PE services
puppet infrastructure tune - Output optimized settings for PE services based on recommended guidelines
Download and install the support_tasks module which includes the task for this solution.
In the console, in the Run section, click Task.
In the Task field, select task st0371. Target your master. From the list of target types, select Node list. Expand the Inventory nodes target. Enter your master’s name and click Search. From the list of results, select your master.
Under Task parameters, select the parameter that matches the command you’d like to run from the dropdown list.
Click Run job. After the task is completed, the command’s output will appear on the Job page.
Many of the solutions that our team provides in the support_tasks module require an authentication token. If you don’t have command line access on your master, you can generate an authentication token with a one day lifetime in the console.
Version and installation information
PE version: 2017.3 and later
OS: Any PE master OS
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
You can generate a token with a one day lifetime using a task in the support_tasks module. Run the task on the master or any other node that has PE client tools installed on it.
Download and install the support_tasks module which includes the task for this solution.
In the console, in the Run section, click Task.
In the Task field, select st0370.
The target node must be your master or any other node that has PE client tools installed on it. Select one target node using either a node query or a list.
Under Task parameters, fill in values for the required parameters: user and password. Ensure that your user is a console user with an appropriate role for the task you’d like to run.
Learn more about user roles and generating a token using puppet-access.
Run the task by clicking Run job. The token is generated in the default location (~/.puppetlabs/token) and can be used for other tasks requiring a token. You can print the token at any time using puppet-access show.
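If you do have command line access on a node with PE client tools installed, you can generate a comparable token directly with puppet-access (a sketch; the command prompts for your console username and password):
puppet-access login --lifetime 1d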
We’d like you to stay up to date with Puppet Enterprise so that you get the latest bug and security fixes. There’s no built-in way to check if there's a new release of PE. However, you can use a task to check for and download the latest z release of your version of PE. If you’re using PE 2019.0 or later, you can run a scheduled task to make sure that you stay up to date.
The task does not upgrade your system. It downloads the most recent z release so that you can upgrade during your next maintenance window.
Version and installation information
PE version: 2017.3.x and later
Installation type: Any
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
By installing the puppetlabs-support_tasks module, you can use task st0362 to check if you are on the most up to date z release for your version of PE. For example, if you have PE 2019.0 installed, the task checks for the most recent z release of 2019.0, not 2019.1 or later versions.
If there’s a later z release available, you can download it to a location of your choice.
Note: This task requires that the master has internet access.
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
If you’re using PE 2019.0 or later, to check for and download the z release, schedule task st0362 with the master as the job target.
If you’re using an earlier version of PE, or if you’d prefer to run the task ad hoc, run task st0362 from the console with the master as the job target.
In either case, the task takes one required parameter, dlpath, the download path on the master for the Puppet Enterprise installer. If the task fails, check the console output to make sure your path is correct.
You can use a task to clean or purge nodes by installing a module. Using a task is easier and takes less time than completing the manual steps. You can run the tasks from the command line directly, from the console, or using Bolt.
Version and installation information
PE version: 2017.3 to 2019.x
OS: RedHat, CentOS, OracleLinux, Scientific, SLES, Ubuntu
Installation type: Monolithic
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
To run the task, you must download and install the puppetlabs-support_tasks module which includes the tasks for this solution.
To run the tasks on the command line:
To clean the cert:
On the master, run:
puppet task run support_tasks::st0317a_clean_cert agent_certnames=<COMMA SEPARATED LIST OF CERTNAMES> -n <MASTER HOSTNAME>
For example:
puppet task run support_tasks::st0317a_clean_cert agent_certnames=pe-2016415-agent.platform9.puppet.net -n pe-201901-master.puppetdebug.vlan
If you have compile masters, prevent cleaned nodes from checking in again by refreshing the certificate revocation list (CRL) on your compile master nodes.
Run Puppet on all of your compile masters. On the master, run:
puppet job run -q 'resources { type = "Class" and title = "Puppet_enterprise::Profile::Master" and !(certname = "FQDN_of_your_MoM") }'
To purge a node:
On the master, run:
puppet task run support_tasks::st0317b_purge_node agent_certnames=<COMMA SEPARATED LIST OF CERTNAMES> -n <MASTER HOSTNAME>
For example:
puppet task run support_tasks::st0317b_purge_node agent_certnames=pe-2016415-agent.platform9.puppet.net -n pe-201901-master.puppetdebug.vlan
Your output should look similar to the following:
[nate@workstation]$ puppet task run support_tasks::st0317b_purge_node agent_certnames=agent1,agent2,agent3 -n master.corp.net
Starting job ...
New job ID: 24
Nodes: 1
Started on master.corp.net ...
Finished on node master.corp.net
agent2 :
result : Node purged
agent3 :
result : Node purged
agent1 :
result : Node purged
Job completed. 1/1 nodes succeeded.
Duration: 6 sec
If you have compile masters, prevent purged nodes from checking in again by refreshing the certificate revocation list (CRL) on your compile master nodes.
Run Puppet on all of your compile masters. On the master, run:
puppet job run -q 'resources { type = "Class" and title = "Puppet_enterprise::Profile::Master" and !(certname = "FQDN_of_your_MoM") }'
To run the tasks in the console:
Follow the steps in our documentation to run tasks in the console on a node list, choosing either task st0317a (to clean certs) or st0317b (to purge nodes). Run the task on your master and add the cert names of your nodes as a comma separated list of parameter values under agent_certnames.
If you have compile masters, prevent those nodes from checking in again by refreshing the CRL. Follow the steps in our documentation to run Puppet in the console on each compile master node.
To run the tasks using Bolt:
To clean the cert:
On the master, run:
bolt task run support_tasks::st0317a_clean_cert agent_certnames=<COMMA SEPARATED LIST OF CERTNAMES> -n <MASTER HOSTNAME>
For example:
bolt task run support_tasks::st0317a_clean_cert agent_certnames=pe-2016415-agent.platform9.puppet.net -n pe-201901-master.puppetdebug.vlan
If you have compile masters, prevent cleaned nodes from checking in again by refreshing the certificate revocation list (CRL) on your compile master nodes.
Run Puppet on all of your compile masters. On the master, run:
puppet job run -q 'resources { type = "Class" and title = "Puppet_enterprise::Profile::Master" and !(certname = "FQDN_of_your_MoM") }'
To purge a node:
On the master, run:
bolt task run support_tasks::st0317b_purge_node agent_certnames=<COMMA SEPARATED LIST OF CERTNAMES> -n <MASTER HOSTNAME>
For example:
bolt task run support_tasks::st0317b_purge_node agent_certnames=pe-2016415-agent.platform9.puppet.net -n pe-201901-master.puppetdebug.vlan
If you have compile masters, prevent purged nodes from checking in again by refreshing the certificate revocation list (CRL) on your compile master nodes.
Run Puppet on all of your compile masters. On the master, run:
puppet job run -q 'resources { type = "Class" and title = "Puppet_enterprise::Profile::Master" and !(certname = "FQDN_of_your_MoM") }'
PE version: 3.7.x, 3.8.x, 2015.2.x
Puppet Enterprise (PE) connects to external Lightweight Directory Access Protocol (LDAP) directory services through PE's Role-Based Access Control (RBAC) service, allowing you to use existing users and user groups that have been set up in your external directory service. You may need to troubleshoot RBAC when you are unsuccessful in importing external users and groups through LDAP into PE.
Collect the following information before starting to troubleshoot RBAC issues:
An LDIF (LDAP Data Interchange Format) file from a user or group with which you are attempting to authenticate, obtained by running an ldapsearch query with OpenLDAP or a dsquery query with Active Directory (see the sketch after this list).
Your PE external directory settings. See "Getting Your PE External Directory Settings," below.
A debug-level console-services.log from an unsuccessful attempt to log into the console using an external user or group. See "Getting a Debug-Level console-services.log," below.
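For example, a minimal ldapsearch sketch for capturing a user's LDIF from OpenLDAP (the server, bind DN, base DN, and uid below are placeholders; substitute your own values):
ldapsearch -x -H ldap://ldap.example.com:389 -D "cn=admin,dc=example,dc=com" -W -b "ou=users,dc=example,dc=com" "(uid=jsmith)"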
Comparing the LDIF with your PE external directory settings will reveal the most common RBAC issue: misconfiguration of your external directory settings in the console. The RBAC service sends queries to LDAP using external directory settings. If your external directory settings do not match the LDIF or are incorrect, you will not be able to import users and groups in PE. The most common misconfiguration issues are:
Mismatched or incorrect relative distinguished names (RDNs)
Mismatched or incorrect lookup values
Mismatched casing
Special characters that are not escaped (common in PE 3.7.0 and PE 3.7.1)
Getting your PE external directory settings
Log into the PuppetDB node (the master in a monolithic installation) as root, and run the following query to get your external directory settings.
Note: You can ignore permission denied errors.
For PE 3.7.x and 3.8.x:
sudo -u pe-postgres /opt/puppet/bin/psql -d pe-rbac -c "SELECT row_to_json(row) FROM ( SELECT id,display_name,help_link,type,hostname,port,ssl,login,connect_timeout,base_dn,user_rdn,user_display_name_attr,user_email_attr,user_lookup_attr,group_rdn,group_object_class,group_name_attr,group_member_attr,group_lookup_attr FROM directory_settings) row"
For 2015.2.x:
sudo -u pe-postgres /opt/puppetlabs/server/bin/psql -d pe-rbac -c "SELECT row_to_json(row) FROM ( SELECT id,display_name,help_link,type,hostname,port,ssl,login,connect_timeout,base_dn,user_rdn,user_display_name_attr,user_email_attr,user_lookup_attr,group_rdn,group_object_class,group_name_attr,group_member_attr,group_lookup_attr FROM directory_settings) row"
Read more about connecting to external directories: 2015.2, 3.8, 3.7
Getting a debug-level console-services.log
In addition to the misconfiguration issues listed above, a debug-level log from an unsuccessful attempt to log into the console can provide information about:
Connection and authentication problems with LDAP directory services.
LDAP usernames that duplicate default PE user accounts. See the list of PE user accounts here: 2015.2.x, 3.8.x, 3.7.x.
To get a debug-level log from an unsuccessful attempt to log into the console using an external user or group, do the following steps:
Change console-services logging to debug level by editing /etc/puppetlabs/console-services/logback.xml, changing <root level="info"> to <root level="debug">.
Restart PE console processes by running:
puppet resource service pe-console-services ensure=stopped
puppet resource service pe-console-services ensure=running
Attempt to log into the console as an external user that has been having trouble logging in. The debugging information about connection, authentication, or account name problems will be captured in the console-services.log file.
PE 3.7.x and PE 3.8.x: The resulting console-services.log file is located at: /var/log/pe-console-services/console-services.log.
PE 2015.2.x: The resulting console-services.log file is located at /var/log/puppetlabs/console-services.log.
Read more about logging and debugging: The log blog: An update on debugging Puppet Enterprise
PE version: 2015.2.x, 3.8.x, 3.7.x, 3.3.2
Please see our documentation for later versions.
Tested version: 2015.2.0, 3.8.1, 3.7.0, 3.3.2
This article provides instructions on using the PE console to change the Java heap size for PuppetDB.
PuppetDB is limited by the amount of memory available to it. If it runs out of memory, it will start logging OutOfMemoryError exceptions and delaying command processing. If you are using PostgreSQL, we recommend that you allocate 128 MB of memory to the Java heap as a base, plus 1 MB for each Puppet node in your infrastructure. Change the amount of memory to suit your infrastructure with the following set of instructions.
Note: Ensure that you have sufficient free memory before increasing the memory that is used by PuppetDB.
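For example, on a Linux node you can check how much memory is free before making the change:
free -m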
Instructions for: PE 2015.2.x, 3.8.x, 3.7.x, 3.3.2
For PE 2015.2.x:
To change the JVM heap size for PuppetDB, edit the JAVA_ARGS setting in PuppetDB’s init script config file. The location of this file varies by platform and package. For Red Hat-like PE installations, the file is located in /etc/sysconfig/pe-puppetdb. For Debian/Ubuntu PE installations, the file is located in /etc/default/pe-puppetdb.
Edit JAVA_ARGS in the init script config file pe-puppetdb as follows:
To use 512MB of memory: JAVA_ARGS="-Xmx512m"
To use 1GB of memory: JAVA_ARGS="-Xmx1g"
To see the effects of your change, check the performance dashboard.
Additional resources:
Configuring the Java heap size
Scaling recommendations
Monitor the performance dashboard
For PE 3.8.x and PE 3.7.x:
To change the Java heap size for PuppetDB:
Log into the PE console.
Select Classification in the main navigation bar at the top of the page.
Select the PE PuppetDB node group.
Click the Classes tab.
From the puppet_enterprise::profile::puppetdb class parameters drop-down list, select java_args, and update the value to {"Xmx": "512m", "Xms": "512m"}. This will change the heap size to 512 MB.
Click Add parameter, and click the Commit changes button.
Log into the PuppetDB node as root. (On a monolithic installation, your PuppetDB node is the Puppet master.)
Run puppet agent -t to apply the change.
To see the effects of your change, check the performance dashboard.
Additional resources:
Configuring the Java heap size
Scaling recommendations
Monitor the performance dashboard
For PE 3.3.2:
Log into the PE Console.
Select Nodes in the main navigation bar at the top of the page.
Locate and select your PuppetDB node. (On a monolithic installation, your PuppetDB node is the Puppet master.)
Click Edit.
Under Classes, locate the pe_puppetdb::pe class, and click Edit parameters.
Locate the java_args parameter and update the value to {"-Xmx"=>"1024m", "-Xms"=>"1024m"}.
Click Done.
Click Update.
Log into your PuppetDB node as root.
Restart the pe-puppetdb service.
puppet resource service pe-puppetdb ensure=stopped
puppet resource service pe-puppetdb ensure=running
Run puppet agent -t.
To see the effects of your change, check the performance dashboard.
Additional resources:
Configuring the Java heap size
Scaling recommendations
Monitor the performance dashboard
When Facter's built-in facts cause issues with third-party software or PE, you might want to override or disable them.
For example:
The ec2 structured fact might cause repeated connections to OpenStack metadata nodes. A very large fact might cause performance issues in Puppet Enterprise, such as slow Puppet runs.
On Solaris systems that automount each user's home directory, the mountpoints fact, which stores all filesystems mounted on each system, becomes very large.
Version and installation information
PE version: 2015.3.x to 2017.3.x
Installation type: Monolithic
Solution
Note: Select the version that matches your infrastructure. Our PE documentation moved when we released PE 2017.3, so in some cases we've included several links to help you navigate to an appropriate version for you.
In PE 2016.4.5 to 2017.3.x, block the generation of the following facts using the blocklist setting in facter.conf: ec2_metadata, ec2_userdata, mountpoints, filesystems, or partitions. This solution prevents all facts within the listed groups from being resolved when Facter runs.
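A minimal facter.conf sketch for blocking one of these facts (this assumes the default config location, /etc/puppetlabs/facter/facter.conf; create the file if it doesn't exist):
facts : {
  blocklist : [ "mountpoints" ],
}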
If you can’t use that solution, complete the following steps to create a module that overrides the fact. This example uses the mountpoints fact in a version of PE earlier than 2016.4.5.
The new module uses a custom fact to replace the fact's value with an empty string. You can override a different fact by replacing mountpoints in steps 3 and 7 with a different fact.
Create a module: puppet module generate --skip-interview user-factoverride
Create a directory for your fact in the module: mkdir -p factoverride/lib/facter
Create a custom fact in the file factoverride/lib/facter/mountpoints_override.rb with the following contents:
Facter.add(:mountpoints) do
  has_weight 100
  # Resolve to an empty string so this high-weight resolution replaces the built-in fact's value.
  setcode { '' }
end
If you do not use Code Manager: Deploy the factoverride module to the module directory for the desired environment.
If you use Code Manager: Deploy the module using the Code Manager workflow ( PE 2017.2, PE 2017.3 ) instead.
Log into the master as root.
Copy the new fact to the cache directory with pluginsync by running puppet agent -t. Your output should be similar to the following:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Notice: /File[/opt/puppetlabs/puppet/cache/lib/facter/mountpoints_override.rb]/ensure: defined content as '{md5}7392c29fb7fe442b55196fc7aa8a92fb'
Info: Loading facts
Info: Caching catalog for puppetmaster.domain.com
Info: Applying configuration version '1c4bf0f741772f86cc7d91479a85b9d6b78f455d'
Verify that the override was successful by checking that mountpoints does not return any output. As root on the master run facter -p mountpoints.
To check the table sizes of the PE databases you can enter commands manually or download and install the puppetlabs-support_tasks module and run the task st0287.
Note: This solution requires a Puppet managed PostgreSQL instance. It will not work on a PostgreSQL instance that is not managed by Puppet.
Version and installation information
PE version: All (manual), 2017.3 and later (task)
Installation type: PE with a Puppet managed PostgreSQL instance
Solution
To run the commands manually:
On the node running the pe-postgresql service:
Log in as the pe-postgres user: su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql"
Display a list of the databases that you can connect to. Run:
SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;
Connect to a database. Choose a database from the list output. Run \c <DATABASE NAME>. For example: \c pe-puppetdb
Display all of the chosen database’s tables and their sizes. Run: \dt+
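For example, a non-interactive one-liner that lists table sizes for pe-puppetdb (a sketch using the same Puppet-managed PostgreSQL paths as above):
su - pe-postgres -s /bin/bash -c "/opt/puppetlabs/server/bin/psql -d pe-puppetdb -c '\dt+'"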
To use a task:
Install a module and use a task to check the table sizes of PE databases. This task completes the manual steps for you.
Note: You can use this solution for PE 2017.3 and later.
If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
When you run the task, make sure to run it on nodes running the pe-postgresql service. When you do, you get the output Pe-postgresql service detected, will continue to run. and the output from the selected DB table(s). If you run the task on a node that isn’t running pe-postgresql you get the output: Node not running pe-postgresql service, please select node which is.
Complete the following steps:
Install the puppetlabs-support_tasks module from the Forge: https://forge.puppet.com/puppetlabs/support_tasks.
Navigate to the console, click Task.
Select task st0287.
In the Parameter dbname, enter the PE database you’d like information about as its Value. Use one of the following:
pe-puppetdb, pe-postgres, pe-classifier, pe-rbac, pe-activity, pe-orchestrator, postgres or all
Under inventory, select PQL query and enter the following query:
resources[certname] { type = "Service" and title = "pe-postgresql" }
Run the job. When the task has finished you will see -- ST#0287 Task ended: <Date/Time Stamp> -- at the end of the output.
The Puppet agent daemon is disabled on some of your nodes. You'd like to know which nodes are enabled and which are disabled, from the command line or in the console.
Version and installation information
PE version: 2017.3.x to 2019.2.x
OS: Unix, Windows
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Identify which nodes are enabled and disabled by running tasks in the support_tasks module.
Note: You can use the output from the tasks in this article to enable the daemon on nodes where it is disabled.
Download and install the support_tasks module which includes the tasks for this solution.
Note: The Windows task was added in version 1.1 of the support_tasks module. If you have an earlier version installed, update the module.
Use the appropriate task in the command line or in the console.
PE version 2019.0.x to 2019.2.x (Unix or Windows): task st0285
PE version 2017.3.x to 2018.1.x (Unix): task st0285a
PE version 2017.3.x to 2018.1.x (Windows): task st0285b
On the command line:
Select the appropriate task. If you need to run two tasks, run the second task when the first task is finished running.
Create a list of disabled nodes in a file (nodefile.txt) by running a task on the master. For example:
puppet task run support_tasks::st0285_find_disabled_agents --no-color --query 'nodes[certname] { }' | grep Finished | awk '{printf "%s%s",sep,$NF; sep=",\n"}' > nodefile.txt
To show a list of disabled nodes on the screen in PE 2019.2, on the master, run:
puppet task run support_tasks::st0285_find_disabled_agents --no-color --query 'nodes[certname] { }' | grep Finished | awk '{printf "%s%s",sep,$NF; sep=",\n"}' && echo
In the console:
Follow the instructions in our documentation for using tasks in the console. You can select nodes using a PQL query, node list, or node group. Select the appropriate task. You can't run two tasks at the same time; if you need to run two tasks, run the second task when the first task is finished running.
For example, to select all nodes, under Inventory select PQL and under common queries select All nodes. Click Submit query and click Refresh to update the node results. Click Run job.
Example output:
The output for enabled nodes contains an error so that you can sort enabled and disabled nodes in the console:
Started on pe-201813-master.platform9.puppet.net ...
Failed on pe-201813-master.platform9.puppet.net
Error: Task finished with exit-code 1
STDOUT:
Puppet agent is enabled
Job failed. 1 node failed, 0 nodes skipped, 0 nodes succeeded.
Duration: 0 sec
The output for a disabled node:
Started on pe-201813-master.platform9.puppet.net ...
Finished on node pe-201813-master.platform9.puppet.net
STDOUT:
Puppet agent is disabled
Job completed. 1/1 nodes succeeded.
Duration: 0 sec
After enabling package data collection using the package_inventory_enabled parameter, Puppet runs fail with No such file or directory error messages.
Error messages and logs
When running Puppet on a node, you get the following errors:
Could not set 'present' on ensure: No such file or directory @ rb_sysopen - /opt/puppetlabs/puppet/cache/state/package_inventory_enabled at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/agent.pp:77
Wrapped exception: No such file or directory @ rb_sysopen - /opt/puppetlabs/puppet/cache/state/package_inventory_enabled
change from absent to present failed: Could not set 'present' on ensure: No such file or directory @ rb_sysopen - /opt/puppetlabs/puppet/cache/state/package_inventory_enabled at /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/agent.pp:77
Source: /Stage[main]/Puppet_enterprise::Profile::Agent/File[/opt/puppetlabs/puppet/cache/state/package_inventory_enabled]/ensure
File: /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/profile/agent.pp
Line: 77
Version and installation information
PE version: 2016.4.x to 2018.1.x
OS: *nix
Installation type: Monolithic or split
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
This issue occurs when statedir is not set to the default location, /opt/puppetlabs/puppet/cache/state, in puppet.conf. The default location of these directories changed in PE 2015.2. If you upgraded from an older version of PE, statedir, vardir, and rundir might be set to something other than the default location.
You can fix this manually or by installing a module and using a task. Note that you can only complete the task-based solution in PE 2017.3.x and later on Unix nodes.
To fix the issue manually:
On one node or a handful of nodes
In puppet.conf, change the location for each directory to its default value.
On each affected agent node:
Edit /etc/puppetlabs/puppet/puppet.conf to remove the following settings: vardir, rundir, and statedir.
Restart the puppet service:
puppet resource service puppet ensure=stopped
puppet resource service puppet ensure=running
Run puppet agent -t to verify that the issue has been resolved.
On many nodes
Manage the location of vardir, rundir, and statedir using the puppetlabs-inifile module.
Install or add puppetlabs-inifile to your deployment.
Add the following to a class.
$puppet_conf = '/etc/puppetlabs/puppet/puppet.conf'
ini_setting { 'puppet.conf remove vardir':
ensure => absent,
path => $puppet_conf,
section => 'main',
setting => 'vardir',
}
ini_setting { 'puppet.conf remove rundir':
ensure => absent,
path => $puppet_conf,
section => 'main',
setting => 'rundir',
}
ini_setting { 'puppet.conf remove statedir':
ensure => absent,
path => $puppet_conf,
section => 'main',
setting => 'statedir',
}
Add the class to a node group with the affected nodes in it.
To use a task to solve this issue:
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Resolve the error by installing a module and using a task to remove the vardir, rundir, and statedir settings from puppet.conf. You can only complete the task-based solution in PE 2017.3.x and later. It fixes the issue only for Unix nodes.
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
In the console, click Task. Under Task, select support_tasks::st0236_set_cache_paths_to_default. Under Inventory, use a PQL query, node list, or node group to select the affected nodes. Click Run job.
The task exits with one of the following outputs:
The output of puppet apply followed by ST#0236 Task ended - The task removed the settings from the agent's configuration file.
No changes necessary - The node is not configured with the settings causing the issue, so no changes are made.
I am changing the hostname of my master. How do I make the corresponding updates to Puppet Enterprise?
Please follow the steps in our documentation.
Note: We changed the task number for this solution to st0263 in version 1.1.1 of the puppetlabs-support_tasks module on 17 January 2020. The task number in earlier versions of the module was kb0263. We encourage you to use the most up to date version of the module.
Code deployments in Puppet Enterprise fail with a "Cannot lock" or "Index is locked" error.
Error messages
2017.x and 2018.1.x
2017-09-07T11:27:38.813759-04:00 frup7682 puppetserver[6337]: Exception in thread "main" java.lang.IllegalStateException: Index is locked. This can occur if the server crashed the last time it was running, and was unable to clean-up the index.lock file. This can be fixed by removing /opt/puppetlabs/server/data/puppetserver/filesync/storage/puppet-code/test.git/index.lock
2016.x
Error while sync'ing live code directory","cause":"org.eclipse.jgit.api.errors.JGitInternalException: Cannot lock /opt/puppetlabs/server/data/puppetserver/filesync/client/puppet-code.git/modules/environments/common_r416_production/index"
A lock file is used during code deployment to prevent the code staging directory from being overwritten. When Puppet Server shuts down during code deployment, the lock file is not cleaned up, causing subsequent deployments to fail.
Version and installation information
PE version: 2016.x, 2017.x, 2018.1.x
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Fix the issue by removing the lock file. You can fix this manually or by installing a module and using a task. You can only complete the task-based solution in PE 2017.3.x and later.
To fix the problem manually:
Stop the Puppet Server service. On the master, run:
puppet resource service pe-puppetserver ensure=stopped
Remove the lock file. On the master, run:
find /opt/puppetlabs/server/data/puppetserver/filesync/ -type f -name 'index.lock' -delete
Start the Puppet Server service. On the master, run:
puppet resource service pe-puppetserver ensure=running
With the lock file removed, you should be able to deploy code.
To use a task to solve this issue:
You can only complete the task-based solution in PE 2017.3.x and later.
Download a module and run a task that resolves the error by stopping the pe-puppetserver service, removing file sync locks, and starting the pe-puppetserver service.
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
In the console, click Task. Under Task, select the task support_tasks::st0267_clear_file_sync_locks.
Under Inventory, select Node list. In the search field, start typing the name of the master of masters node, and click Search. Select the master of masters node.
Click Run job.
The task will exit with the following output:
Puppet master node detected - The task removed the locks and started the pe-puppetserver service.
Not a Puppet master node exiting - The node the task was executed on was not the master; no change was made.
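If you do have command line access, the same task can be run with puppet task run (a sketch; replace the hostname with your master of masters' certname):
puppet task run support_tasks::st0267_clear_file_sync_locks -n master.example.com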
I want to deploy code, but I don’t have access to the command line.
Version and installation information
PE version: 2017.3 and later
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
You can deploy code to one environment or all available environments from the console by installing a module and running a task.
Before you begin:
To use this task, you must have a valid active RBAC token. If you do not, you can create one by following the steps in our documentation or using a task in the support_tasks module.
To use this task:
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
In the console, in the Run section, click Task.
In the Task field, select st0298.
Under Task parameters in the Value field, enter the environment to deploy code to. You can enter one environment by entering its name or all environments by entering all.
If you need to regenerate the certificate for your monolithic master, you can use a task to automate the steps in our documentation to make the process quicker and easier.
Version and installation information
PE version: 2018.1.x
In PE 2018.1.8 and later, you can quickly regenerate your monolithic master's certificate using the command puppet infra run regenerate_master_certificate. You can also use the command in PE 2019.0.3 and later and PE 2019.1.
Installation type: monolithic
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
You can run the task from the command line either by using the puppet task command or by using Bolt. If you’re using Bolt with the default SSH transport (and not the PCP protocol), you will avoid getting an error when Puppet services restart. However, either method will regenerate the certificate.
Before you begin: Ensure all DNS alt names for the master are present in /etc/puppetlabs/enterprise/conf.d/pe.conf in the pe_install::puppet_master_dnsaltnames parameter.
To run the task that regenerates the certificate, you must download and install the puppetlabs-support_tasks module, which includes the task for this solution.
Run the task on the command line:
On the master, run the task against the certname of the PE master.
puppet task run support_tasks::st0299_regen_master_cert -n $(puppet config print certname)
Note: The task restarts all Puppet services, which causes a connection error. You can safely ignore the error while the task continues to run in the background. To check if the task is complete, tail /var/log/messages. When you see output from puppet agent -t in the system log similar to the following, the task is complete.
# tail /var/log/messages
Aug 15 09:08:28 oldhostname systemd: Reloading pe-orchestration-services Service.
Aug 15 09:08:29 oldhostname systemd: Reloaded pe-orchestration-services Service.
Aug 15 09:08:29 oldhostname puppet-agent[4780]: (/Stage[main]/Puppet_enterprise::Profile::Orchestrator/Puppet_enterprise::Trapperkeeper::Pe_service[orchestration-services]/Service[pe-orchestration-services]) Triggered 'refresh' from 1 event
Aug 15 09:08:34 oldhostname puppet-agent[4780]: Applied catalog in 19.16 seconds
Run the task using Bolt and SSH
To avoid errors when services are restarted during the task, use Bolt with the default SSH transport (and not the PCP protocol). On the master, run the task against the certname of the PE master:
bolt task run support_tasks::st0299_regen_master_cert -n $(puppet config print certname) --modulepath="/etc/puppetlabs/code/environments/production/modules"
Troubleshooting:
If the DNS alt names in pe.conf and the cert don't match, the task returns an error with the DNS alt names that are missing from pe.conf:
`hostname.domain.com' is set up as a DNS alt name in the existing certificate, but is not present in the 'pe_install::puppet_master_dnsaltnames' setting of '/etc/puppetlabs/enterprise/conf.d/pe.conf'. Please add it to continue, or use the 'dnsaltname_override' task parameter to skip this check.
If you have DNS alt names in the cert that aren’t being used, you can force the task to only use the names specified in pe.conf.
Using a task on the command line:
On the master, run:
puppet task run support_tasks::st0299_regen_master_cert --params '{"dnsaltname_override": true }' -n $(puppet config print certname)
Using Bolt to run the task:
On the master, run:
bolt task run support_tasks::st0299_regen_master_cert --params '{"dnsaltname_override": true }' -n $(puppet config print certname) --modulepath="/etc/puppetlabs/code/environments/production/modules"
Important: Puppet Enterprise 2018.1 is the last release to support Marionette Collective, also known as MCollective. While PE 2018.1 remains supported, Puppet will continue to address security issues for MCollective. Feature development has been discontinued. Future releases of PE will not include MCollective. For more information, see the Puppet Enterprise support lifecycle.
To prepare for these changes, migrate your MCollective work to Puppet orchestrator to automate tasks and create consistent, repeatable administrative processes. Use orchestrator to automate your workflows and take advantage of its integration with Puppet Enterprise console and commands, APIs, role-based access control, and event tracking.
The puppet-agent packages for Linux include a logrotate configuration for MCollective that restarts the agent service as part of the rotation. Using logrotate is not required, since MCollective handles log rotation on its own. When logrotate runs on a large number of Linux nodes simultaneously, the service restart disrupts all in-progress MCollective operations and creates a thundering herd of MCollective re-connections. You can improve MCollective’s stability by removing the logrotate configuration from the agent.
Version and installation information
PE version: Resolved in PE 2016.4.11, 2017.3.6, and 2018.1. Affects PE versions earlier than 2016.4.11, 2016.5.x, and 2017.1.x to 2017.3.5.
OS: Linux
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
You can fix this issue by running a manifest on each node running MCollective or by downloading a module and running a task. Note that you can only complete the task-based solution in PE 2017.3.x.
To run a manifest:
On each node running MCollective:
Create a manifest named mco_remove_logrotate.pp with the following contents:
if ($::facts['kernel'] == "Linux") {
# An exec is used to avoid failures when logrotate is not installed.
exec {'Remove MCollective logrotate configuration':
command => '/bin/rm /etc/logrotate.d/mcollective',
onlyif => '/bin/test -s /etc/logrotate.d/mcollective',
}
}
Apply the manifest by running puppet apply mco_remove_logrotate.pp
To use a task:
This task disables unneeded logrotate configuration for MCollective. You can only complete the task-based solution in PE 2017.3.x.
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
In the console, click Task. Under Task, select support_tasks::st0244_disable_mco_logrotate. Under Inventory, select PQL query, node list, or node group, whichever best allows you to target all nodes running MCollective. Click Run job.
Learn more about running tasks using PQL queries, node lists, or node groups in our documentation.
When the task is completed, it exits with the output -ST#0244 Task ended <date>
You can use tasks to stop a thundering herd on Unix (including macOS) or Windows nodes. On targeted agent nodes, these tasks stop the puppet agent service and restart it, delaying agent check-in for each node by anywhere from 1 second up to the configured runinterval.
Note: Each task might take up to the runinterval to complete. Each node is offline for only 10 or 15 seconds, but the console will be busy while the task is running.
Version and installation information
PE version: 2017.3.x and later
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
Install a module that uses tasks to stop a thundering herd on Unix and Windows nodes.
Download and install the puppetlabs-support_tasks module which includes the tasks for this solution.
Run the tasks on all of your nodes or a subset of your nodes by following the instructions in our documentation for targeting and using tasks in the console. The linked instructions are for PE 2018.1; make sure you’re using the right version for your deployment.
For PE 2019.0.x and later:
Select task st0346_herd_resolver for Unix and Windows nodes.
Run the task on the target nodes.
For PE 2018.1.x and earlier:
Select either task st0346a for Unix nodes or st0346b for Windows nodes. You can’t run both at the same time.
Run the task on the target nodes.
If you need to run the other task, run it when the first task is complete.
To prevent future thundering herds, use the steps in the following articles:
Note: Choose only one of these solutions.
If you’re using PE 2017.3.1 or later, prevent future thundering herds with the solution in Prevent a thundering herd: Use max-queued-requests.
If you’re able to use Cron, you can spread out agent catalog requests by running Puppet out of Cron. Read Prevent a thundering herd: Run Puppet out of Cron in Puppet Enterprise 2015.2.x to 2019.1.x.
If your agents are configured to run using cached catalogs, facts are not up to date on the master. You can make sure that fact-based classification and automation behave as expected by uploading facts to the master using a scheduled task (in PE 2019.0 and later) or an ad hoc task (in PE 2018.1 and later). Even if your agents aren’t configured to run using cached catalogs, you can use a task to upload facts at any time.
Version and installation information
PE version: 2018.1 and later, 2019.0 and later for scheduled tasks
Installation type: All
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
By using a task to call the puppet facts upload command, you can upload facts for all nodes or for selected nodes. To make sure that all facts are uploaded, we recommend that you run the task on all nodes. However, you might prefer to run the task on selected nodes. For example, if you only have a few nodes that are configured to run using cached catalogs, you might want to run the task only on those nodes.
Download and install the puppetlabs-support_tasks module which includes the tasks for this solution.
Run the task.
If you’re using PE 2019.0 or later, you can run a scheduled task or run the task ad hoc.
You can run a scheduled task on all nodes or on selected nodes. Select the task st0361_uploading_facts for both Windows and Linux nodes.
You can also run task st0361_uploading_facts ad hoc on all nodes or on selected nodes. To run the task on all nodes, run the task on the All Nodes node group. To run the task on selected nodes, pick a method from our documentation to run the task in the console.
If you’re using an earlier version of PE, you can run tasks ad hoc.
To run the task on all nodes, run the task on the All Nodes node group. To run the task on selected nodes, pick a method from our documentation to run the task in the console. Select the appropriate task for your operating system family, st0361a for Linux or st0361b for Windows. If you need to run both tasks, run one task first and then the other.
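The tasks wrap the puppet facts upload command described above, so if you do have command line access you can run it directly on an agent node:
puppet facts upload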
When I run puppet agent -t, I get a (SystemStackError) stack level too deep error.
Error messages and logs
Error message:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Internal Server Error: org.jruby.exceptions.RaiseException: (SystemStackError) stack level too deep
Logs:
In the Puppet Server log,
for PE 2015.2.x to 2019.1.x: /var/log/puppetlabs/puppetserver/puppetserver.log
for PE 3.8.x: /var/log/pe-puppetserver/puppetserver.log:
2015-11-02 11:17:10,496 ERROR [p.p.ringutils] Exception while handling HTTP request
org.jruby.exceptions.RaiseException: (SystemStackError) stack level too deep
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613) ~[puppet-server-release.jar:na]
at org.jruby.RubyEnumerable.inject(org/jruby/RubyEnumerable.java:866)
You received this error because the pe-puppetserver process exceeded the JVM stack size.
Version and installation information
PE version: 3.8.x to 2019.1.x
Solution
To resolve the error, increase the JVM stack size from 1MB (default) to 2MB by completing the following steps or by installing a module and using a task. You can only complete the task-based solution in PE 2017.3 and later.
Note: Our terminology changed when we released PE 2019.1. A master of masters is now called a master and a compile master is now called a compiler.
If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
To increase the JVM stack manually:
Open /etc/sysconfig/pe-puppetserver (on EL-based systems) or /etc/default/pe-puppetserver (on Ubuntu). Add an -Xss2m option to the existing JAVA_ARGS, for example:
JAVA_ARGS="-Xms<XMS SIZE> -Xmx<XMX SIZE> -Xss2m"
Note: Add the -Xss option if it doesn't exist; edit it if it already exists. No other changes to JAVA_ARGS are necessary.
Restart pe-puppetserver:
puppet resource service pe-puppetserver ensure=stopped ; puppet resource service pe-puppetserver ensure=running
Verify that the change resolved the error by running puppet agent -t.
To use a task:
Increase the JVM stack size from 1MB (default) to 2MB by installing a module and using a task.
Note that you can only complete the task-based solution using PE 2017.3 and later.
Download and install the puppetlabs-support_tasks module which includes the task for this solution.
In the console, click Task. Select the task support_tasks::st0149_Resolve_Stack_Level_Too_Deep. Under Inventory in PE 2018.1.x or Select targets in PE 2019.x, select Node list. Add only master or compiler nodes (nodes that run Puppet Server) to the list using the instructions in our documentation for running tasks on node lists in the console.
Select Run Job. The task will exit with one of the following:
Puppetmaster node detected followed by -ST#0149 Task ended - The task increased Puppet Server's JVM stack size to 2MB.
Argument Already Present - A value is already configured for the JVM stack. Either this task has already been executed, or this task will not resolve your issue.
Not a Puppet MASTER node exiting - The task was executed on a node that is not a Puppet Server; no change was made.
For example, in PE 2019.1, when the task changes the Puppet Server's JVM stack size, the output is:
Puppetmaster node detected - EL
Notice: /Service[pe-puppetserver]/ensure: ensure changed 'running' to 'stopped'
service { 'pe-puppetserver':
  ensure => 'stopped',
}
Notice: /Service[pe-puppetserver]/ensure: ensure changed 'stopped' to 'running'
service { 'pe-puppetserver':
  ensure => 'running',
}
-ST#0149 Task ended 1558107826 --
We maintain SFTP servers to provide an upload option for files that are too large to attach to Support tickets and for customers who cannot use third-party file-sharing services.
This article describes:
How to get an SFTP account
SFTP servers and their SSH host fingerprints
How to upload the output of the support script directly to an SFTP server from PE 2018.1.8 and later versions of 2018.1, and 2019.0.3 and later
Note: If you’re using a version of the puppetlabs-support_tasks module older than 1.1.1 (17 January 2020), please update the module to use these steps. We renumbered all the tasks in the module at that time.
Solution
To upload a file to one of our SFTP servers, you need an account created and configured by our Support team. To authorize access to the account, we will ask you to provide an SSH public key.
Files that you upload to our SFTP servers are read-only and accessible only by our Support team. Files are automatically deleted after seven days, but if you need them deleted at a different time, let us know. We’ll take care of it.
Our SFTP servers
SFTP upload services are available at the following locations:
Primary SFTP server
The primary SFTP server is available at customer-support.puppetlabs.net and has the following SSH host fingerprints:
# SHA256 hashes
SHA256:FBe09SAyXBiLrWyHgrc7GrLR+hK0sB23VUjELt89Gjg (RSA)
SHA256:elWbA2dwlXKLd4q43SfFbSp1Dw2FnbLFufsJ4ITn5TU (ECDSA)
SHA256:3fBFrK3hOAYrAXLHnPTvOFUUsNtAYaSxX3l59RBt3dY (ED25519)
# MD5 hexadecimal hashes
MD5:7e:83:fa:91:4d:e0:1a:fb:04:8f:c5:cb:83:15:b3:b9 (RSA)
MD5:1a:6e:f3:d0:de:14:2b:7c:00:0f:c6:69:14:b9:3e:64 (ECDSA)
MD5:bf:5c:3d:e1:61:1e:e4:da:57:9e:2c:73:d3:4f:ed:26 (ED25519)
Asia-Pacific SFTP server
An SFTP server optimized for fast file transfer within the Asia-Pacific region is available at customer-support-syd.puppetlabs.net and has the following SSH host fingerprints:
# SHA256 hashes
SHA256:Ls9S/Ag91mj+h77ONqFfwqbPq+ubpB5IYn3l5JlHvwU (RSA)
SHA256:AdqX1vx8Qv2awDBsErYV8/WMjHQQD2GEKU/TQ+Yko7I (ECDSA)
SHA256:gl9m1fjBAUUFIeHce/FHyKF73HA7S7g29qFoOkq1UAY (ED25519)
# MD5 hexadecimal hashes
MD5:e6:0b:46:4c:3a:3b:d2:64:8a:cf:59:a6:3d:36:d6:95 (RSA)
MD5:3a:7c:52:43:e6:d3:35:49:2d:fb:74:44:22:b3:b8:e4 (ECDSA)
MD5:dd:e6:17:5e:50:0e:b7:d1:6e:a5:e2:5c:88:9b:06:c7 (ED25519)
Changelog
Regular maintenance of the SFTP servers occasionally includes upgrading to newer operating system versions or hardware. In these cases, the host fingerprints of the servers will change. Most SFTP software will react to a change in the host fingerprint by failing the connection with a warning. For example, OpenSSH prints the following:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:elWbA2dwlXKLd4q43SfFbSp1Dw2FnbLFufsJ4ITn5TU.
This section records the dates at which SFTP fingerprints changed so that users can verify updates.
2018-12-12: The host fingerprints for customer-support-syd.puppetlabs.net were changed.
2018-12-07: The host fingerprints for customer-support.puppetlabs.net were changed.
Uploading the output of the support script directly from PE 2018.1.8 and later versions of 2018.1, or 2019.0.3 and later
If you’re using PE 2018.1.8 or a later version of 2018.1, or PE 2019.0.3 and later, you can upload the output of the support script directly to our primary SFTP server.
Before you begin, you need: an SFTP account created by us and an active Support ticket number.
To run the script and upload the output using default credentials, run the following:
puppet enterprise support --v3 --ticket <TICKET NUMBER> --upload
For example:
puppet enterprise support --v3 --ticket 12345 --upload
To specify your own credentials, run the following:
puppet enterprise support --v3 --ticket <TICKET NUMBER> --upload --upload-user <USER NAME> --upload-key <PATH TO SSH PUBLIC KEY>
For example:
puppet enterprise support --v3 --ticket 12345 --upload --upload-user [email protected] --upload-key ~/.ssh/id_rsa_pe.pub
For the latest version of PE:
Download the latest master versions here.
Download the latest agent versions here.
Download the latest client tools package here.
For previous releases:
Download all previous versions of PE here: https://puppet.com/misc/pe-files/previous-releases.
After I updated the OS for some of my agents, some of the agents are not responding in the console. When I run the agent on a node manually, it shows up for about an hour, then goes back to not responding.
Error messages and logs
On the agent, in /var/log/messages, an error similar to the following:
Oct 1 15:21:36 agentnode puppet-agent[11680]: Unable to fetch my node definition, but the agent run will continue:
Oct 1 15:21:36 agentnode puppet-agent[11680]: (/File[/opt/puppetlabs/puppet/cache/facts.d]) Failed to generate additional resources using 'eval_generate': getaddrinfo: Name or service not known
Oct 1 15:21:40 agentnode puppet-agent[11680]: Could not retrieve catalog from remote server: getaddrinfo: Name or service not known
Version and installation information
PE version: Any
OS: Red Hat 7.5, CentOS 7.5
Solution
You have this issue because, during the update to RHEL 7.5, glibc was updated, but the puppet service was not restarted to reload the updated libraries.
Learn more from Red Hat’s release notes.
To fix the issue, restart the puppet service on the affected agent nodes. You can do this either manually on the command line or by running a task in the console.
For PE 2017.2.x and earlier: You must use the manual solution. (Tasks were introduced in PE 2017.3.)
For PE 2017.3.x and later: If you are fixing the issue for a small number of nodes, the manual fix is easier. If you are fixing the issue for a large number of nodes, the task-based solution is easier.
To restart the service manually:
Restart puppet service on each affected agent node by running:
puppet resource service puppet ensure=stopped
puppet resource service puppet ensure=running
To restart the service using a task:
Restart the puppet service on all Red Hat nodes by running the service::linux task in the console using the following steps.
In the console, in the Run section, click Task.
In the Task field, select the service::linux task.
Enter the following parameters:
action: restart
name: puppet
In the Inventory list, select PQL query.
Enter the following query:
query: `inventory[certname] { facts.os.family = "RedHat" and facts.os.release.major = "7" and facts.os.release.minor = "5" }`
Click Submit query and click Refresh to update the node results.
Click Run job.
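If you also have command line access, you can combine the same task and PQL query in one command on the master (a sketch using the parameters above):
puppet task run service::linux action=restart name=puppet --query 'inventory[certname] { facts.os.family = "RedHat" and facts.os.release.major = "7" and facts.os.release.minor = "5" }'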
I upgraded to PE 2017.2.1. When I deploy code by running puppet-code deploy --all --wait, the command runs more slowly than it did before my upgrade, and it fails with a "timeout exceeded" error. How can I fix this?
Error messages and logs
When running puppet-code deploy with the --wait flag, code deployments fail with the error:
"status": "error"
}
}
}
]
},
"kind": "puppetlabs.code-manager/timeout-exceeded",
"msg": "The deploys job failed to sync before the sync timeout was exceeded."
}
Code Manager's $timeouts_sync parameter defaults to 60 seconds, which is too low for Puppet Enterprise deployments that:
use compile masters
deploy a large number of environments at the same time
deploy large environments
This results in a timeout exceeded error.
In PE 2017.2.1, deploying code using puppet-code deploy --all --wait synchronizes code to the master of masters and all compile masters, which causes the deployment to time out and fail when $timeouts_sync is too short. In addition, the r10k gem shipped with PE 2017.2.1 checks for module deprecation every time code is deployed to each environment, slowing code deployment.
Version and installation information
PE version: 2017.2.1
OS: RHEL-based
Installation type: Any
Solution
Fix this issue by upgrading to PE 2017.2.2.
If you're unable to upgrade, work around this issue by increasing the value of timeouts_sync in Hiera and installing a newer version of the r10k gem.
As root on the master of masters (MoM):
Increase the value of timeouts_sync in Hiera. In your control repository, add the following parameter to hieradata/common.yaml:
puppet_enterprise::master::code_manager::timeouts_sync: <value>
Choose a value that allows enough time to deploy code on the MoM and all compile masters and that takes geographical distance into account. For example, in a Puppet Enterprise deployment with four compile masters in four different data centers, start with timeouts_sync set to 600, and increase it as needed.
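For example, to use the starting value suggested above, the entry in hieradata/common.yaml would be:
puppet_enterprise::master::code_manager::timeouts_sync: 600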
Warning: Do not edit Code Manager's configuration file manually. Puppet manages this configuration file automatically and will undo any manual changes you make.
Find the version of the r10k gem that you're using. Run: /opt/puppetlabs/puppet/bin/gem list r10k
The output is the version of the r10k gem that you're running: r10k (2.5.4)
Uninstall the current version of the r10k gem and install a newer version of the r10k gem. Run:
# /opt/puppetlabs/puppet/bin/gem uninstall r10k; /opt/puppetlabs/puppet/bin/gem install r10k -v 2.5.5; /opt/puppetlabs/puppet/bin/gem list r10k
To complete the uninstallation process, enter Y when prompted:
Remove executables:
r10k
in addition to the gem? [Yn] Y
When this step is completed successfully you get output similar to:
Removing r10k
Successfully uninstalled r10k-2.5.4
Fetching: r10k-2.5.5.gem (100%)
Successfully installed r10k-2.5.5
Parsing documentation for r10k-2.5.5
Installing ri documentation for r10k-2.5.5
Done installing documentation for r10k after 1 seconds
1 gem installed
*** LOCAL GEMS ***
r10k (2.5.5)
If you need information while troubleshooting, increase the log level for the Puppet agent service.
Version and installation information
PE version: 2015.x to 2019.0.x
OS: *nix, Windows
Solution
Increase the log level using a method that fits your issue.
Note: Agent logs include sensitive information, including domain names and custom facts. Review logs and remove sensitive information before sending them to us.
My issue occurs on every agent run
If your issue occurs during every agent run, increase the log level for a single manual agent run.
Add -d to the list of options for your agent run. If you output the logs to a file, you can attach them to a support ticket: puppet agent -td > PuppetDebug.log
My issue is intermittent
When the issue is intermittent, increase the log level for all Puppet runs. You can do this by editing either puppet.conf or the agent service configuration.
Editing puppet.conf:
Works for both *nix and Windows.
Increases the log level for all agents regardless of invocation method.
Adds debug level information to reports sent to the master.
Does not require any service restarts.
Editing the agent service configuration:
Allows you to target specific nodes.
Edit puppet.conf
On Windows nodes, puppet.conf is located at %PROGRAMDATA%\PuppetLabs\puppet\etc (usually C:\ProgramData\PuppetLabs\puppet\etc).
On *nix nodes, puppet.conf is located at $confdir/puppet.conf (usually /etc/puppetlabs/puppet/puppet.conf).
On the target node, edit puppet.conf to add log_level = debug.
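For example, a minimal puppet.conf fragment (placing the setting in the [agent] section is one common choice; [main] also works):
[agent]
log_level = debug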
Edit the agent service configuration
On Windows nodes
On the target node, restart the agent service using the following command: c:\>sc stop puppet && sc start puppet --debug --trace
On *nix nodes:
On the target node, edit the following file:
All *nix (except Debian): /etc/sysconfig/puppet
Debian: /etc/default/puppet
Add the following line to the end of the file:
PUPPET_EXTRA_OPTS=--log_level=debug
Restart the agent service:
puppet resource service puppet ensure=stopped
puppet resource service puppet ensure=running
Where are my logs?
The location of the logs depends on your OS.
On Windows platforms, they are available in Event Viewer.
Select Windows Logs > Application.
On *nix platforms, the agent service logs messages to syslog:
On RHEL-based OS: /var/log/messages
On Mac OS X: /var/log/system.log
On Solaris: /var/adm/messages
Troubleshooting for *nix platforms
When the log level is increased to debug on *nix, messages might be suppressed by the syslog process. In that case, your logs contain less output than expected, along with a message similar to the following:
Sep 26 07:05:58 pe-201645-agent journal: Suppressed 2367 messages from /system.slice/puppet.service
Resolve the issue by redirecting the logs to an output file.
On the target node, edit the following file:
All *nix (except Debian): /etc/sysconfig/puppet
Debian: /etc/default/puppet
Add the --logdest option to the PUPPET_EXTRA_OPTS line, for example:
PUPPET_EXTRA_OPTS=--logdest=/var/log/puppetlabs/agent.log --log_level=debug
Restart the agent service:
puppet resource service puppet ensure=stopped
puppet resource service puppet ensure=running
An upgrade is available for your Puppet Enterprise infrastructure, and you want to verify that it is in a stable and well-provisioned state before proceeding.
Version and installation information
PE version: 2018.x and later
OS: Unix
Solution
Before, during, and after upgrading, check these details to help ensure that the different types of upgrades go smoothly.
Verify that your infrastructure is ready for an upgrade by installing the puppetlabs-preupgrade_check module and running a Puppet Bolt plan. You can use the output from the plan to proceed with your upgrade or to request assistance from Puppet Support.
Glossary
When this document refers to infrastructure nodes, it means nodes in the PE Infrastructure node group in the PE Console, which should include the primary master, compilers, and PE database nodes. This is distinct from agent nodes, which are the systems being managed by PE infrastructure nodes.
Z-release upgrades (minor update): Within a PE version series. For example, PE 2018.1.8 to PE 2018.1.9. If you are an experienced PE user, you can easily handle minor updates, especially in standard (monolithic) architectures.
Incremental X- or Y-release upgrades (major upgrades): From one PE version series to the next. For example, PE 2018.1 to PE 2018.2, or PE 2018 to PE 2019 (incremental, +1). These upgrades can be especially complex.
Long term support (LTS) upgrades: From one PE LTS release to the next. For example, PE 2016 LTS to PE 2018 LTS. These upgrades require you to audit and revise code and modules for compatibility.
Running the preupgrade_check plan
This plan requires the following:
Puppet Bolt
A user account on infrastructure nodes that can gain elevated (sudo) privileges without a password prompt
A valid SSH key for the workstation used to run this plan, added to the infrastructure nodes’ authorized_keys
The plan uses Bolt’s SSH transport and relies on targeting nodes that can gain elevated permissions without a password prompt. Bolt has additional options for authentication; for details, see its documentation.
Bolt assumes that the target nodes have been connected to via SSH before. If you run the preupgrade_check plan on a node that isn’t in your workstation’s known_hosts file, the plan might report unexpected errors. Before running the plan, confirm that you can SSH into the affected nodes and gain elevated permissions.
The tasks copy scripts to the nodes and run them as root. To gather data manually, copy the check_os.sh and check_time.sh scripts from the module’s tasks directory to infrastructure nodes and run them there. Note that this produces only debug mode JSON output; the logic that checks the output for potential issues is in the plan, not the tasks.
On a workstation that can SSH to the PE master, install Puppet Bolt.
Note: We do not recommend running this plan from the console, nor do we recommend installing Bolt on the master.
Add mod 'puppetlabs/preupgrade_check' to your workstation’s Bolt Puppetfile.
Run bolt puppetfile install.
Prepare a list of Puppet Enterprise infrastructure nodes’ full hostnames in a text file (nodefile.txt), adding one entry per line. This list should be the same as your PE Infrastructure node group’s matched nodes.
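For example, nodefile.txt might look like the following (hostnames are placeholders):
master.example.com
compiler01.example.com
puppetdb01.example.com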
Run the preupgrade_check plan from the command line: bolt plan run preupgrade_check --run-as root --targets @nodefile.txt
Example output:
If the check does not find any problems with the target nodes, it reports:
Plan completed successfully with no result
Otherwise, it reports either an error message or a JSON object with warnings about the node’s state.
If the plan reports any problems, run it in debug mode and report the result by opening a ticket with Puppet Support before proceeding with the upgrade. To run in debug mode, add the debug=true option:
bolt plan run preupgrade_check debug=true --run-as root --targets @nodefile.txt
This outputs details about the infrastructure nodes, which Support can use to identify issues and advise you.
Special considerations in customized infrastructures
If you’ve performed specialized configuration in the past, or aren’t sure, it’s worth checking whether default PE settings or configurations have been altered before upgrading.
Disable cached catalogs on infrastructure nodes: Cached catalogs can help nodes run when they don’t have access to a master, but can cause conflicts during an upgrade. By default, infrastructure nodes do not use cached catalogs.
Disable no-op on infrastructure nodes: Running Puppet in no-op mode makes it easier to approve specific changes, but no-op mode can prevent upgrades on infrastructure nodes. By default, infrastructure nodes do not enable no-op mode.
Ensure agent runs are successful on all infrastructure components: You can view this in the nodes belonging to the PE Infrastructure node group. If any infrastructure nodes aren’t successfully completing Puppet runs, the upgrade might fail.
If your infrastructure is air-gapped (no internet connection), download installation packages first: Download agent packages (see the agent installation documentation) as well as the PE installation tarball.
If you use a proxy, verify the configuration: If your master uses a proxy server to access the internet, prior to installation, specify pe_repo::http_proxy_host and pe_repo::http_proxy_port parameters in pe.conf, Hiera, or in the PE console in the pe_repo class of the PE Master node group.
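For example, to set these parameters in Hiera (host and port values are placeholders):
pe_repo::http_proxy_host: 'proxy.example.com'
pe_repo::http_proxy_port: '3128'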
Check and upgrade your modules
Upgrading Puppet Enterprise, especially across major versions, can introduce breaking changes to Puppet that affect not only your own code but also code from modules you’ve installed from other sources.
For modules you’ve written, test them before upgrading using the Puppet Development Kit (PDK), which lets you run module tests in simulated environments from your PE upgrade target version.
If you use modules from the Puppet Forge in your environments, confirm that the installed versions of those modules are compatible with the version of PE that you’re upgrading to, and if necessary update your Puppet code and upgrade those modules to be compatible before initiating the upgrade.
Note: We recommend using Puppetfiles to install, manage, and upgrade modules from the Puppet Forge. Doing so typically does not automatically upgrade modules, so the versions that existed when you first added them to your Puppetfile stay in place until you edit your Puppetfile to specify newer versions. If you do not use Puppetfiles, r10k, or Code Manager, or if you install Forge modules from an internal store or control repo branch, first determine how Forge modules are installed and maintained in your infrastructure, and then update them appropriately.
Examine your environments’ Puppetfiles for Forge modules. Note each Forge module’s name and the version installed, if one is specified.
Check each module’s Forge page for the current version, and click the Compatibility tab to determine whether the current version of that module is compatible with your version of PE. For modules supported by Puppet, which have the Supported tag on the Forge, we recommend upgrading to the newest version that lists compatibility with both your current PE version and the version you’re upgrading to before upgrading PE.
If a module is out of date, read the release notes for its current version for any potentially breaking changes to the module that would require you to update your Puppet code.
If necessary, make any required or recommended adjustments to your Puppet code, as described in the documentation for the modules’ newer versions.
Edit your Puppetfile in each environment to specify an updated version of each outdated module (see the example Puppetfile entry after these steps). If you have test or development environments, update those Puppetfiles first, continue with the next steps to catch any issues with the upgraded modules, then return to this step to update your production environment. Updating modules individually can also help isolate whether updating a specific module causes problems with your code.
Caution: The Puppetfile does NOT include Forge module dependency resolution. You must make sure that you install and update every module needed for all of your specified modules to run. This can include updating some modules in tandem.
Deploy your code to update modules based on the Puppetfile. For example, to deploy all environments, run puppet code deploy --all.
Perform a Puppet run to confirm that the agent can still compile and apply catalogs with the upgraded modules. Run with the --debug and --logdest flags to capture log output that can help diagnose any problems: puppet agent --test --debug --logdest /tmp/agent-debug.log
If no version of a module exists that is compatible with both your current version of PE and the version you’re upgrading to, use the newest version of the module that lists compatibility with your current version of PE, then repeat these steps after upgrading to install a version of the module compatible with your new PE version.
If no version of a module that you use lists compatibility with the version of PE that you’re upgrading to, contact the module’s maintainer. If a module is no longer maintained and you must use an alternative module, you must also update any Puppet code that invokes the older module.
If upgrading a module causes an environment’s Puppet code to fail prior to upgrading PE, you can also downgrade the module version in the environment’s Puppetfile.
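For reference, a Puppetfile entry that pins a Forge module to a specific version looks like this (module name and version are examples only):
mod 'puppetlabs/stdlib', '5.2.0'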
Check for deprecations and removals
Major components might be deprecated or removed between major versions, requiring additional preparation before updating. For example, MCollective was deprecated in PE 2018.1 and removed in PE 2019.0. For details, see the PE release notes for all major versions between your current and target version.
Puppet Enterprise (PE) connects to external Lightweight Directory Access Protocol (LDAP) directory services through PE's Role-Based Access Control (RBAC) service, allowing you to use existing users and user groups from your external directory service in PE. If you are not able to import LDAP users into PE or have other RBAC issues, collect the following information and attach it to a support ticket:
Your PE external directory settings.
An LDIF (LDAP Data Interchange Format) file from an impacted user.
Note: Before sending us this information, please review and redact it as needed. PE external directory settings include your LDAP or AD password, usernames, hostnames, and information about your cryptography. LDIF user information might include passwords, password hashes, IP addresses, hostnames, and usernames.
Version and installation information
PE version: 3.8.x to 2017.2.x
Solution
Get your PE external directory settings
As root, log into the console node (the master in a monolithic installation).
Pull the external directory settings from the directory service (/ds) API endpoint:
curl https://$(puppet config print certname):4433/rbac-api/v1/ds --cert $(puppet config print hostcert) --key $(puppet config print hostprivkey) --cacert $(puppet config print cacert) > /tmp/ds.json
Important security note: The resulting /tmp/ds.json file includes the plaintext password for the external directory lookup user. Edit the file to remove it.
Get an LDIF
Get an impacted user's LDIF using simple authentication:
ldapsearch -LLL -x -h <ACTIVE DIRECTORY HOSTNAME> -D "<EXTERNAL DIRECTORY LOOKUP USER>" -W -b "<USER RELATIVE DISTINGUISHED NAME>,<BASE DISTINGUISHED NAME>" "(ObjectClass=user)" > /tmp/ldapsearch.out
For example,
ldapsearch -LLL -x -h pe-381-agent-win2008.example.com -D "cn=query,cn=Users,dc=example,dc=com" -W -b "cn=Users,dc=example,dc=com" "(ObjectClass=user)" > /tmp/ldapsearch.out
For additional troubleshooting information on RBAC and external directories, read Basic RBAC troubleshooting in PE 3.7.x, 3.8.x, and 2015.2.x.
Nodes can be removed from different components in my deployment using puppet node purge <node name>, node-ttl, and node-purge-ttl. I'm not sure what each of them does or how I should use them.
Version and installation information
PE version: 2015.x and later
OS: N/A
Installation type: All
Solution
Use to manage your deployment: puppet node purge, which combines puppet node deactivate and puppet node clean
What it does: puppet node deactivate marks nodes deactivated; puppet node clean removes deactivated nodes from PE
When it happens: Immediately
Permanent? Yes

Use to manage PuppetDB data: node-ttl and node-purge-ttl
What it does: node-ttl marks nodes expired; node-purge-ttl removes expired nodes from PuppetDB
When it happens: After a specified interval
Permanent? No: nodes will be added back to PuppetDB after the next run.
What is puppet node purge?
Use puppet node purge to completely remove nodes from your deployment. The puppet node purge command is a wrapper around two separate commands:
puppet node deactivate marks the node as deactivated in PuppetDB. Although the node remains present in PuppetDB, it no longer shows up in the console under Nodes > Inventory. The node's report data is still available in the console.
puppet node clean deletes the Puppet master's information cache for the node, including certs and cached catalogs. It creates a new CRL (certificate revocation list) and applies it to the master.
View nodes deactivated by puppet node purge by querying against the deactivated field:
curl -X GET http://localhost:8080/pdb/query/v4 --data-urlencode 'query=nodes { deactivated is not null }' | python -m json.tool
What are node-ttl and node-purge-ttl?
Use node-ttl and node-purge-ttl to maintain your deployment. Both settings remove outdated node information from PuppetDB and free disk space. However, node removal is not permanent. When a node is expired and purged with node-ttl and node-purge-ttl, the next Puppet run on that node adds it back to PuppetDB.
node-ttl marks the node as expired in PuppetDB after a specified amount of time. Although the node remains present in PuppetDB, it no longer shows up in the console under Nodes > Inventory. The node's report data is still available in the console.
node-purge-ttl affects only deactivated or expired nodes in PuppetDB. It automatically purges nodes that have been deactivated or expired for a specified amount of time from PuppetDB. All facts, catalogs, and reports for the relevant nodes are deleted.
Understanding when nodes are expired and deleted from PuppetDB with node-ttl and node-purge-ttl
Neither node-ttl nor node-purge-ttl affects nodes immediately.
node-ttl is triggered when nodes have had no activity for a specified amount of time (no new catalogs, facts, or reports). If you set node-ttl = 7d, you might expect your nodes to expire 7 days after the last new catalog, fact, or report. However, nodes aren't evaluated and expired until the first garbage collection (GC) run after those 7 days have elapsed.
node-purge-ttl is triggered a specified amount of time after the node is deactivated or expired. If you set node-purge-ttl=1h, you might expect your node to be purged from PuppetDB one hour after your node expires or deactivates. However, it won't be evaluated and purged until the first GC after the hour has elapsed.
How often GC happens and when node-ttl and node-purge-ttl are effective is governed by the gc-interval setting. For example, if gc-interval is set to 60 minutes (the default) and node-ttl is set to 5 minutes, the node exists in PuppetDB as an active node and is visible in the console for up to 60 minutes. During a GC run, node-ttl is evaluated. The node auto-expires if the node-ttl time interval (in this case 5 minutes) has passed since the last new activity on the node.
When an interval is specified for node-purge-ttl, it is applied during the GC run. For example:
gc-interval = 5m
node-ttl = 4m
node-purge-ttl = 1h
Note: d stands for days, m stands for minutes, s stands for seconds. All example PuppetDB logs are located on the PuppetDB node (the master in a standard deployment) at /var/log/puppetlabs/puppetdb/puppetdb.log.
2016-09-30 14:17:13,299 INFO [p.p.c.services] Starting sweep of stale nodes (threshold: 4 minutes)
2016-09-30 14:17:13,306 INFO [p.p.c.services] Auto-expired node nagios-client
2016-09-30 14:17:13,306 INFO [p.p.c.services] Auto-expired node nagios-server
2016-09-30 14:17:13,307 INFO [p.p.c.services] Finished sweep of stale nodes (threshold: 4 minutes)
2016-09-30 14:17:13,308 INFO [p.p.c.services] Starting purge deactivated and expired nodes (threshold: 1 hour)
2016-09-30 14:17:13,309 INFO [p.p.c.services] Finished purge deactivated and expired nodes (threshold: 1 hour)
2016-09-30 14:17:13,309 INFO [p.p.c.services] Starting sweep of stale reports (threshold: 14 days)
2016-09-30 14:17:13,312 INFO [p.p.c.services] Finished sweep of stale reports (threshold: 14 days)
2016-09-30 14:17:13,312 INFO [p.p.c.services] Starting database garbage collection
2016-09-30 14:17:13,325 INFO [p.p.c.services] Finished database garbage collection
Using the default values:
node-ttl = 7d
node-purge-ttl = 0s
report-ttl = 14d
Note: report-ttl automatically deletes reports that are older than the specified amount of time.
Setting node-purge-ttl with a value of 0s is equivalent to leaving the value unset. The GC doesn't purge nodes.
gc-interval = 5m
node-ttl = 4m
node-purge-ttl = 0s
2016-09-30 14:27:06,272 INFO [p.p.c.services] Starting sweep of stale nodes (threshold: 4 minutes)
2016-09-30 14:27:06,291 INFO [p.p.c.services] Auto-expired node nagios-master
2016-09-30 14:27:06,294 INFO [p.p.c.services] Finished sweep of stale nodes (threshold: 4 minutes)
2016-09-30 14:27:06,294 INFO [p.p.c.services] Starting sweep of stale reports (threshold: 14 days)
2016-09-30 14:27:06,299 INFO [p.p.c.services] Finished sweep of stale reports (threshold: 14 days)
2016-09-30 14:27:06,300 INFO [p.p.c.services] Starting database garbage collection
2016-09-30 14:27:06,314 INFO [p.p.c.services] Finished database garbage collection
Leaving node-ttl and node-purge-ttl unset causes PuppetDB to fall back to the defaults (node-ttl = 7d, node-purge-ttl = 0s): the stale node sweep still runs with the default 7-day threshold, but the GC doesn't purge nodes.
gc-interval = 5m
node-ttl - unset
node-purge-ttl - unset
2016-09-30 15:13:01,324 INFO [p.p.c.services] Starting sweep of stale nodes (threshold: 7 days)
2016-09-30 15:13:01,329 INFO [p.p.c.services] Finished sweep of stale nodes (threshold: 7 days)
2016-09-30 15:13:01,329 INFO [p.p.c.services] Starting sweep of stale reports (threshold: 14 days)
2016-09-30 15:13:01,339 INFO [p.p.c.services] Finished sweep of stale reports (threshold: 14 days)
2016-09-30 15:13:01,340 INFO [p.p.c.services] Starting database garbage collection
2016-09-30 15:13:01,356 INFO [p.p.c.services] Finished database garbage collection
When both values are explicitly set to 0s, neither interval is applied: the GC skips both the stale node sweep and the purge, and nodes remain in PuppetDB and the console. For example:
gc-interval = 5m
node-ttl = 0s
node-purge-ttl = 0s
2016-09-30 14:34:44,294 INFO [p.p.c.services] Starting sweep of stale reports (threshold: 14 days)
2016-09-30 14:34:44,308 INFO [p.p.c.services] Finished sweep of stale reports (threshold: 14 days)
2016-09-30 14:34:44,308 INFO [p.p.c.services] Starting database garbage collection
2016-09-30 14:34:44,323 INFO [p.p.c.services] Finished database garbage collection
How do I configure node-ttl and node-purge-ttl?
View the configuration settings for node-ttl and node-purge-ttl in the PuppetDB configuration file. By default, this config file is located in /etc/puppetlabs/puppetdb/conf.d on your PuppetDB node, under the [database] section. Both settings must be edited in the console, in the PE PuppetDB group, in the puppet_enterprise::puppetdb class.
By default, the settings maintaining your node information in PuppetDB are:
node-ttl = 7d
node-purge-ttl = 0s
report-ttl = 14d
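For reference, these defaults appear in the configuration file's [database] section similar to the following sketch (the exact filename, commonly database.ini, can vary by version). View the values here, but make changes in the console:
[database]
node-ttl = 7d
node-purge-ttl = 0s
report-ttl = 14d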
Giving node-ttl or node-purge-ttl a value of 0s disables it, so the GC won't expire or purge nodes. Note that 0s is not always the same as leaving the value unset: when node-ttl is unset, PuppetDB falls back to the 7d default shown above.
How do I view nodes impacted by node-ttl and node-purge-ttl?
View expired nodes in PuppetDB by using PQL to query the PuppetDB API root endpoint using the expired field:
In PE 2017.3 and later:
curl -X GET http://localhost:8080/pdb/query/v4 --data-urlencode 'query=nodes { node_state = "inactive" }' | python -m json.tool
In PE 2017.2 and earlier:
curl -X GET http://localhost:8080/pdb/query/v4 --data-urlencode 'query=nodes { expired is not null }' | python -m json.tool
View expired and deactivated nodes by combining both queries using or:
curl -X GET http://localhost:8080/pdb/query/v4 --data-urlencode 'query=nodes { expired is not null or deactivated is not null }' | python -m json.tool
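If you have PE client tools installed and configured, you can run the same PQL without curl by using the puppet query command (a sketch, assuming the client tools are set up for your user):
puppet query 'nodes { expired is not null or deactivated is not null }'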
Important: Puppet Enterprise 2018.1 is the last release to support Marionette Collective, also known as MCollective. While PE 2018.1 remains supported, Puppet will continue to address security issues for MCollective. Feature development has been discontinued. Future releases of PE will not include MCollective. For more information, see the Puppet Enterprise support lifecycle.
To prepare for these changes, migrate your MCollective work to Puppet orchestrator to automate tasks and create consistent, repeatable administrative processes. Use orchestrator to automate your workflows and take advantage of its integration with Puppet Enterprise console and commands, APIs, role-based access control, and event tracking.
My facts aren’t syncing to MCollective. I used mco puppet runonce -F <fact_name>=<value> to trigger a change using facts, but MCollective isn’t finding any nodes. I need to sync my facts and make sure MCollective stays up to date as I add new facts. How can I do that?
Version and installation information
PE version: 2016.2.x
OS: Windows Server 2008 R2
Solution
There are two ways to update facts for MCollective in Windows:
Do a one-time sync of facts
Run the following .bat file: C:\ProgramData\PuppetLabs\mcollective\etc\refresh-mcollective-metadata.bat
Increase the frequency of the scheduled task updating facts
By default, the Windows scheduled task that syncs facts used with MCollective runs only once a day. Increase how frequently the scheduled task runs by managing a new scheduled task with PE.
The default manifest that manages the frequency of the scheduled task is puppet_enterprise::mcollective::server::facter, which is located in /opt/puppetlabs/puppet/modules/puppet_enterprise/manifests/mcollective/server/facter.pp
You shouldn’t change the frequency of the task in this manifest since it will be overwritten, but it provides an example of a scheduled task that refreshes MCollective facts, which you can use to solve your issue.
include puppet_enterprise::params
$mco_etc = $puppet_enterprise::params::mco_etc
scheduled_task { 'custom-pe-mcollective-metadata':
ensure => 'present',
command => "${mco_etc}/refresh-mcollective-metadata.bat",
enabled => true,
trigger => {
'every' => '1',
'schedule' => 'daily',
'start_time' => '13:00'
},
}
Increase the frequency of the task by adding the same scheduled task to your own class and editing the trigger attribute to add the minutes_interval key under the start time key. For example, to run the task every 30 minutes:
include puppet_enterprise::params
$mco_etc = $puppet_enterprise::params::mco_etc
scheduled_task { 'custom-pe-mcollective-metadata':
ensure => 'present',
command => "${mco_etc}/refresh-mcollective-metadata.bat",
enabled => true,
trigger => {
'every' => '1',
'schedule' => 'daily',
'start_time' => '13:00',
'minutes_interval' => '30'
},
}
Read more about how to use the trigger attribute with scheduled tasks, including examples, on the Puppet docs site.
In the vRO client, when you run the Add a Puppet Enterprise Master workflow, it fails with a runtime exception.
Error messages and logs
In the vRO client, when you run the Add a Puppet Enterprise Master workflow, you get either of the following messages:
Unable to create a vCO endpoint of type 'Puppet'. Reason: 'Failed to add Master. Exception: (RuntimeException: Failed to get Facter fact) (Workflow:Add a Puppet Enterprise Master / Add a Puppet Enterprise Master (item1)#7)'
Unable to create a vCO endpoint of type 'Puppet'. Reason: 'Failed to add Master. Exception: (JSchException: Auth cancel) (Workflow:Add a Puppet Enterprise Master / Add a Puppet Enterprise Master (item1)#7)'
Version and installation information
PE version: 2016.4.x to 2018.1.x
Solution
The issue occurs because requirements for the plug-in are not met. The user for the workflow cannot complete the facter command needed to get the current PE version and add the master. Classification for the plug-in must also be correct for the workflow to succeed.
Learn more about vRO configuration.
To add the master, the vro-plugin-user user must:
Be able to SSH into the master from the vRO client.
Either be root or be able to run Puppet commands with sudo without entering a password.
Check that the user meets the requirements: SSH into the master with the vro-plugin-user credentials and run the following command:
sudo /opt/puppetlabs/bin/facter -p pe_server_version
When the command runs successfully, the output is your version of PE, for example, 2017.3.2.
If you are unable to SSH into the master with the vro-plugin-user credentials, confirm that classification is correct.
Ensure that the vro_plugin_sshd and vro_plugin_user classes are classified for the master. If you see any errors, fix them.
In the console, navigate to Classification > All Nodes. Under All Nodes, if the Autosign and vRO Plugin User and sshd config node group is not present, install the Puppet vRO Starter Content.
In the Rules tab, ensure the master is pinned to the Autosign and vRO Plugin User and sshd config node group.
In the Classes tab, ensure that the vro_plugin_sshd, vro_plugin_user, and autosign_example classes are present.
Run puppet agent -t on the master.
SSH into the master as the vro-plugin-user and run the facter command again:
sudo /opt/puppetlabs/bin/facter -p pe_server_version
Troubleshooting OS issues
If you are not able to run the command successfully, use the following troubleshooting sections to fix OS issues.
Unable to SSH into the master
If you are unable to SSH in to the master after completing the steps above, check the following items on the master.
Ensure that the SSH configuration allows password authenticated logins. In /etc/ssh/sshd_config check that the following lines are present:
PermitRootLogin yes
PasswordAuthentication yes
ChallengeResponseAuthentication no
Ensure that the vro-plugin-user is allowed to SSH into the master. Open /etc/ssh/sshd_config and check for configuration issues. Commonly, the AllowUsers setting is enabled but does not contain the vro-plugin-user.
Ensure that your changes to /etc/ssh/sshd_config have been read by restarting the SSH service.
If you have an issue using SSH after fixing these items above, work with your OS vendor to troubleshoot the issue.
Unable to run sudo commands without entering a password
If you are unable to run sudo commands with the vro-plugin-user without entering a password after completing the steps above, check the following items on the master.
Ensure the file /etc/sudoers.d/vro-plugin-user exists and contains entries with NOPASSWD in them.
For example:
vro-plugin-user ALL = (root) NOPASSWD: /opt/puppetlabs/bin/facter -p puppetversion
If it does not, check the Puppet catalog for items that modify the sudoers file.
In /etc/sudoers, ensure that /etc/sudoers.d/ is included.
Ensure that no other configuration options prevent the vro-plugin-user from using sudo without a password. Check sudo access for vro-plugin-user by logging in and running sudo -l.
For example:
$ sudo -l
Matching Defaults entries for vro-plugin-user on this host:
!visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME
LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER
LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin, !requiretty
User vro-plugin-user may run the following commands on this host:
(root) NOPASSWD: /opt/puppetlabs/bin/puppet node purge *
(root) NOPASSWD: !/opt/puppetlabs/bin/puppet node purge *[[\:blank\:]]*
(root) NOPASSWD: /opt/puppetlabs/bin/puppet config print *
(root) NOPASSWD: !/opt/puppetlabs/bin/puppet config print *[[\:blank\:]]*
(root) NOPASSWD: /opt/puppetlabs/bin/facter -p puppetversion
(root) NOPASSWD: /opt/puppetlabs/bin/facter -p pe_server_version
(root) NOPASSWD: /opt/puppetlabs/bin/puppet agent -t
(root) NOPASSWD: /opt/puppetlabs/bin/puppet agent --test --color\=false --detailed-exitcodes
(root) NOPASSWD: /bin/kill -HUP *
(root) NOPASSWD: !/bin/kill -HUP *[[\:blank\:]]*
(root) NOPASSWD: !/opt/puppetlabs/bin/puppet node purge pe-201734-master.puppetdebug.vlan
(root) NOPASSWD: !/opt/puppetlabs/bin/puppet node purge pe-internal-mcollective-servers
(root) NOPASSWD: !/opt/puppetlabs/bin/puppet node purge pe-internal-peadmin-mcollective-client
(root) NOPASSWD: /opt/puppetlabs/bin/puppet resource service puppet ensure\=stopped
(root) NOPASSWD: /opt/puppetlabs/bin/puppet resource service puppet ensure\=running enable\=true
(root) NOPASSWD: /bin/cp /etc/puppetlabs/puppet/ssl/ca/ca_crl.pem /etc/puppetlabs/puppet/ssl/crl.pem
If the sudo issue persists after checking these items, work with your OS vendor to troubleshoot the issue.
This article provides instructions on enabling and viewing the PuppetDB performance dashboard. Using the console is helpful when adjusting the Java heap size because JVM heap size is one of the metrics displayed. The performance console shows performance information and metrics, including memory use, queue depth, command processing metrics, duplication rate, and query stats. It displays min/max/median of each metric over a configurable duration, as well as an animated SVG "sparkline" (a simple line chart that shows general variation).
Note: Links to our documentation are for Puppet Enterprise 2019.0.x. Use the version selector on our docs site to make sure you've got the right version of our docs for your deployment.
Version and installation information
PE version: 3.3.2 to 2019.0.x
Solution
Note: If you enable cleartext HTTP with the following steps, you must configure your firewall to protect PuppetDB from unverified access. Unencrypted HTTP is the only way to view the performance dashboard, since PuppetDB uses host verification for SSL.
For PE 2015.2.x to 2019.0.x
To enable and view the performance dashboard
Log into the PE console.
Select Nodes > Classification.
Select the PE PuppetDB node group.
Click the Classes tab.
Locate the puppet_enterprise::profile::puppetdb class, and from the parameters drop-down list, select listen_address, and update the value to 0.0.0.0.
Note: Setting this parameter to 0.0.0.0 allows all IP addresses to access the PuppetDB performance console.
Click Add parameter, and click the Commit changes button.
Log into your PuppetDB node as root. (On a monolithic installation, PuppetDB is located on the same node as the Puppet master.)
Run puppet agent -t.
View the performance dashboard by navigating to http://<PuppetDB IP address>:<port>, replacing <PuppetDB IP address> with the IP address and <port> with the port of your PuppetDB node (8080 by default).
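Before opening the dashboard in a browser, you can confirm that PuppetDB is answering on the cleartext port by querying its version endpoint (a quick sanity check; adjust the address and port to match your deployment):
curl http://<PuppetDB IP address>:8080/pdb/meta/v1/version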
To disable the performance dashboard
Log into the PE console.
Select Nodes > Classification.
Select the PE PuppetDB node group.
Click the Classes tab.
Locate the puppet_enterprise::profile::puppetdb class. Next to listen_address click Remove.
Click the Commit changes button.
For PE 3.8.x and PE 3.7.x:
To enable and view the performance dashboard
Log into the PE console.
Select Classification in the main navigation bar at the top of the page.
Select the PE PuppetDB node group.
Click the Classes tab.
Locate the puppet_enterprise::profile::puppetdb class, and from the parameters drop-down list, select listen_address, and update the value to 0.0.0.0.
Click Add parameter, and click the Commit changes button.
Log into your PuppetDB node as root. (On a monolithic installation, your PuppetDB node is the Puppet master.)
Run puppet agent -t.
Navigate to http://<PuppetDB IP address:8080>/dashboard/index.html, replacing <PuppetDB IP address:8080> with the IP address and port of your PuppetDB node.
For PE 3.3.2:
To enable and view the performance dashboard
Log into the console.
Select Nodes in the main navigation bar at the top of the page.
Select your PuppetDB node. (On a monolithic installation, your PuppetDB node is the Puppet master.)
Click Edit.
Under Classes, locate the pe_puppetdb::pe class, and click Edit parameters.
Scroll down to the listen_address parameter and update the value with the name of your node.
Click Done.
Log into your PuppetDB node as root.
Run puppet agent -t.
Navigate to http://<PuppetDB IP address:8080>/dashboard/index.html, replacing <PuppetDB IP address:8080> with the IP address and port of your PuppetDB node.