3 ways to set up a PMM Server

Linode VPS

Read full article.

This method involves self-configuring everything you need on a virtual machine in the Linode cloud, with all the complexities and peculiarities that come with it.

Pros:
– You can manage the entire technology stack;
– Customization of the VPS;
– Automated backups and restores are handled on the Linode side.

Cons:
– You will have to manage the entire technology stack yourself;
– Updates of PMM and OS packages must be done manually;
– Security is your responsibility;
– So is monitoring the PMM server's status.

AWS Marketplace

Read full article.

Installing through the Marketplace is much easier than manually setting everything up on a virtual machine: there is no need to maintain the solution or think about backups, since AWS takes care of most of these tasks. The only real difficulty is setting up a VPC for the first time.

Pros:
– Easy to install in a few clicks;
– Full support, including updates and backups, is on the AWS side;
– Customization of the VPS;
– Flexible configuration of security rules using AWS VPC;
– PMM server status monitoring.

Cons:
– High price;
– You need to prepare a VPC and Security Groups to make the PMM server accessible from the internet;
– The AWS VPC rules must be maintained on your side.

HostedPMM

Read full article.

A fully automated solution for deploying a PMM server: the HPMM team takes care of all the tasks of maintaining the virtual machines and software. You only need to register and connect PMM agents.

Pros:
– Low price;
– Easy to install and use in a few clicks;
– Full support, including updates and backups, is on the HPMM side;
– PMM server status monitoring;
– Security is handled on the HPMM side.

Cons:
– There are only two virtual machine options available (although this may be a plus for some).

Top 4 PMM dashboards to start with PMM

When I first logged in to the PMM web interface (which is, by the way, a standard Grafana with a predefined set of dashboards), I was a bit puzzled and did not know where to look first.

So I've decided to write this article to help newcomers start working with PMM. Probably some of it will be interesting for the experts, too.

Let's start with something really interesting. My personal favorite is "PMM Query Analytics", the first in the "PMM dashboards" list:

This dashboard shows the total load (number of executions × server time spent on each query) for each digested query:

What does it mean?

Example: you have a lot of queries like "SELECT customerName FROM customers WHERE customerId = <actual ID>", each with a unique customerId.

PMM Query Analytics will treat all these queries as a single item and calculate for you the impact this query puts on your server.

When a simple query appears in your top 10 queries by load, it usually means you forgot to create an index, you don't have enough buffers, or any of a billion other reasons; either way, you have a motive to take some action.

You can click on a digested query to see more details about it; you can also see query examples with the actual arguments (not just the digested form) and even check the query execution plan, similar to what MySQL EXPLAIN does.
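
For example, checking the plan manually with EXPLAIN would look like this (a sketch; the customers table and customerId column are from the example above):

EXPLAIN SELECT customerName FROM customers WHERE customerId = 123;

If the plan shows a full table scan (type: ALL) instead of an index lookup (type: ref or eq_ref), that confirms the missing-index hypothesis.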

Note that you can sort queries in this dashboard by Load, Query time (averaged across the digested queries), or Query count.

The next dashboard worth a look is "Nodes overview" under the "System (Node)" section:

It's not rocket science, but it shows you basic OS metrics and how they change over time:

This can be extremely useful for investigating incidents, e.g. checking whether the server has enough RAM to withstand peak loads.

Note that this is basically a standard Grafana interface, so you can pick the time zone and period to view in the top left corner:

You can also pick the auto-refresh interval or disable it (this can be handy when viewing longer intervals, to save traffic and browser RAM).

The next basic but very useful dashboard is "MySQL instances overview" under the "MySQL" section (it could just as well be "PostgreSQL instances overview" if you use Postgres, or "MongoDB instances overview"; you get the idea):

Same as the previous one, it shows you basic MySQL performance graphs and how they change over time.

Note that many graphs are collapsed by default; you can expand them by clicking on the collapsed section title (e.g. "InnoDB I/O details" in the screenshot above).

Let's get to the last dashboard in this article: PMM Advisor checks.

It can be accessed by clicking on ‘Advisor checks’ in the left menu:

You will see something like this:

Click on the 'Critical' or 'Warning' counts to view hints about potential configuration issues. This can be handy even for experienced users.

This small walkthrough was only about dashboards, but PMM can do much more; e.g. we did not say a single word about alerting configuration, retention, access control, custom dashboards, and so on.

[SOLVED] PMM Server installation error – x509: certificate signed by unknown authority.

x509: certificate signed by unknown authority.

This error can occur when you try to connect pmm-agent to a pmm-server over a secure channel, but your server only has a self-signed certificate.

pmm-admin config --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Failed to register pmm-agent on PMM Server: Post "https://host:443/v1/management/Node/Register": x509: certificate signed by unknown authority.

So first of all, you need to decide whether the connection really needs to be secured; if it doesn't, you can just add the --server-insecure-tls flag. For example, if the PMM server and PMM clients are on the same isolated network, a fully secure connection is not necessary.

pmm-admin config --server-insecure-tls --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client

In other cases, a secure connection is preferred.

You need to create a DNS entry for the pmm server and make sure the client resolves it correctly.

Then you need to put the certificate, CA chain, and key into the Docker container with the PMM server:

docker cp -L /tmp/cert.pem pmm-server:/srv/nginx/certificate.crt
docker cp -L /tmp/cert.ca pmm-server:/srv/nginx/ca-certs.pem
docker cp -L /tmp/cert.key pmm-server:/srv/nginx/certificate.key

After updating the certificate, you need to restart the container:

docker restart pmm-server

And now the agent should connect to the server correctly.

pmm-admin config --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

PS: You can use a Let’s Encrypt certificate for this.
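
For example, with certbot (a sketch, assuming certbot is installed, the DNS name pmm.example.com points at this host, and port 80 is free for the HTTP challenge):

certbot certonly --standalone -d pmm.example.com
docker cp -L /etc/letsencrypt/live/pmm.example.com/fullchain.pem pmm-server:/srv/nginx/certificate.crt
docker cp -L /etc/letsencrypt/live/pmm.example.com/chain.pem pmm-server:/srv/nginx/ca-certs.pem
docker cp -L /etc/letsencrypt/live/pmm.example.com/privkey.pem pmm-server:/srv/nginx/certificate.key
docker restart pmm-server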

If you still receive this kind of error

We found a bug with the official pmm-client image: the certificates on the server side may be completely correct, but you still receive this error.

pmm-client  | INFO[2022-11-30T08:53:38.245+00:00] Loading configuration file /etc/pmm-agent.yaml.  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.246+00:00] Using /usr/local/percona/pmm2/exporters/node_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/rds_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/azure_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.248+00:00] Using /usr/local/percona/pmm2/exporters/vmagent  component=setup
pmm-client  | Checking local pmm-agent status...
pmm-client  | pmm-agent is not running.
pmm-client  | Registering pmm-agent on PMM Server...
pmm-client  | Failed to register pmm-agent on PMM Server: Post "https://host:443/v1/management/Node/Register": x509: certificate signed by unknown authority.
pmm-client exited with code 1

It is probably related to a problem with the certificates inside the pmm-client image. We reported this bug to Percona, so the problem should be resolved when they update the official image.

For now, the only workaround for the Docker-oriented install instructions is to enable the PMM_AGENT_SERVER_INSECURE_TLS option:

PMM_SERVER=X.X.X.X:443
docker run \
--rm \
--name pmm-client \
-e PMM_AGENT_SERVER_ADDRESS=${PMM_SERVER} \
-e PMM_AGENT_SERVER_USERNAME=admin \
-e PMM_AGENT_SERVER_PASSWORD=admin \
-e PMM_AGENT_SERVER_INSECURE_TLS=1 \
-e PMM_AGENT_SETUP=1 \
-e PMM_AGENT_CONFIG_FILE=config/pmm-agent.yaml \
--volumes-from pmm-client-data \
percona/pmm-client:2

MySQL & PHP on backend best practices

When building a PHP backend on top of MySQL (and not only), it is important to follow some simple tips so that the application works correctly and efficiently.

A few small tips

Don’t mix DB queries and View

Mixing database-access code (business logic) with view code on the backend eventually makes the application very difficult to maintain (technical debt).

Modern frameworks, such as Laravel, Yii, CakePHP, or the Symfony components (etc.), suggest using the Model-View-Controller (MVC) pattern to separate the database logic from the view; the connection between these abstractions is handled by an intermediate controller. A minimal sketch of this separation is shown below.
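
A minimal sketch of this separation with plain functions instead of a real framework (the users table, column names, and the $pdo connection are made up for illustration):

// Model: the only place that talks to the database
function findUser(PDO $pdo, int $id): ?array
{
    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE id = ? LIMIT 1');
    $stmt->execute([$id]);
    return $stmt->fetch(PDO::FETCH_ASSOC) ?: null;
}

// View: only formats data, never queries the database
function renderUserView(?array $user): string
{
    if ($user === null) {
        return '<p>User not found</p>';
    }
    return '<p>Hello, ' . htmlspecialchars($user['name']) . '!</p>';
}

// Controller: glues the model and the view together
function showUserAction(PDO $pdo, int $id): string
{
    return renderUserView(findUser($pdo, $id));
}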

Don’t trust data sent by users

Don't use data entered by users directly in database queries without validation and sanitisation; I recommend reading up on what SQL injections are. In short, if you don't clean up user-entered data, at some point a user can submit a specially crafted query and gain access to your database or corrupt it.

PHP's built-in functions like filter_var() and filter_input() can help you sanitise and validate input data.
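
For example, a small sketch of both functions (the id and email parameter names are made up for illustration):

// Validate an integer ID coming from the query string (e.g. ?id=42)
$id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
if ($id === null || $id === false) {
    http_response_code(400);
    exit('Invalid id');
}

// Validate an email address submitted in a POST form
$email = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);
if ($email === false) {
    http_response_code(400);
    exit('Invalid email');
}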

Don’t Repeat Yourself (DRY)

This is a programming principle that encourages you not to repeat yourself when developing software, but instead to reuse existing code. For example, if your project has several similar queries, you can create a function/method that generates the query from a template, passing the changing value as a parameter.

Instead of:

$user1 = "SELECT id, name FROM users WHERE id = 1 LIMIT 1";
$user2 = "SELECT id, name FROM users WHERE id = 2 LIMIT 1";

Use something like this:

function getUserById(int $id) {
    if ($id <= 0) {
        throw new \OutOfRangeException("ID must be greater than zero");
    }
    return "SELECT id, name FROM users WHERE id = $id LIMIT 1";
}

$user1 = getUserById(1);
$user2 = getUserById(2);

Use prepared statements

Instead of building a query by mixing variables with SQL directives, it is most practical to use prepared statements. This feature complicates the code a little, but lets you worry less about the data submitted by users, because escaping is done automatically.

Using ext-mysqli:

// Prepare the statement ("order" is a reserved word in MySQL, so it is quoted with backticks)
$stmt = $mysqli->prepare('INSERT INTO products(label, `order`, price) VALUES (?, ?, ?)');

// Bind values to the statement (bind_param takes variables, not literals)
$label = 'new product';
$order = 9;
$price = 123.45;
$stmt->bind_param('sid', $label, $order, $price);
// "sid" means:
//  - "s": string (first value)
//  - "i": integer (second)
//  - "d": double/float (third)

// Execute the query
$stmt->execute();

Using PDO:

// Prepare the statement ("order" is quoted with backticks here as well)
$stmt = $pdo->prepare("INSERT INTO products(label, `order`, price) VALUES (:label, :order, :price)");

// Bind values to the statement (PDO will detect the type automatically)
$stmt->bindValue(':label', 'new product');
$stmt->bindValue(':order', 9);
$stmt->bindValue(':price', 123.45);

// Execute the query
$stmt->execute();

Don’t keep unnecessary database connections open

To work with a database from PHP, we first connect to the DB, then create a statement describing the query to be executed, and then fetch the returned data. Developers sometimes forget to close the connection when the work is done; PHP and its libraries can do this for you at script shutdown, but it is safer to close everything explicitly, just in case.

For example:

$stmt = $mysqli->prepare(...);
// Other code here
$stmt->execute();
// Processing response from DB

// Closing connection
$mysqli->close();

Almost the same applies to the PDO extension, with one difference: PDO has no close() method, and the connection is closed when the last reference to the object is destroyed.
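
So the idiomatic PDO equivalent is simply:

// Free the statement and close the connection by dropping all references
$stmt = null;
$pdo = null;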

Try to avoid SELECT * queries

Such queries put a large load on the DBMS, especially when they return a large data set, and you often don't need the data from all the columns of the table for subsequent work.

Instead of:

SELECT * FROM users;

Better to use something like this:

SELECT id, name FROM users;

Use LIMIT/OFFSET pair for pagination

Instead of displaying all the data from the table at once, it is more convenient to use pagination. You can implement it on the database side with the help of the LIMIT and OFFSET directives.

LIMIT – limits the number of returned rows to the specified count.

OFFSET – skips the specified number of rows from the beginning of the result.

For example:

SELECT id, price, created_at FROM orders LIMIT 10 OFFSET 10;

This means: return 10 rows from the orders table, skipping the first 10 rows.
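
A typical way to derive the OFFSET from a page number (a sketch; $pdo is an existing PDO connection and $page would come from validated user input):

$perPage = 10;
$page    = 3; // 1-based page number
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare("SELECT id, price, created_at FROM orders LIMIT :limit OFFSET :offset");
$stmt->bindValue(':limit', (int)$perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', (int)$offset, PDO::PARAM_INT);
$stmt->execute();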

What to do if pagination is not applicable

But let's say LIMIT/OFFSET doesn't suit your case and you need to get all the rows from the table (or a very large number of them).

In such cases, it is better to use an iterator: it lets you process the data received from the database gradually, so at any point in time the backend holds and processes only a small piece of the data. This is much more efficient in terms of resources than receiving the whole data set at once.

For example, using ext-mysqli:

// MYSQLI_USE_RESULT streams rows from the server instead of buffering the whole result set
$result = $mysqli->query($query, MYSQLI_USE_RESULT);
while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
    // your code here
}

Using PDO:

// For MySQL, disable buffering so that rows are streamed instead of loaded all at once
$pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$stmt = $pdo->query($query);
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // your code here
}

Sliding window – another alternative for pagination

Sometimes the capabilities of LIMIT/OFFSET are not enough, but you still need to select data in blocks. Suppose we have a cron script (executed every hour) that must fetch all new data from the database, ignoring the data already selected during previous runs.

Here is a small example of such a script:

// Get the last processed ID
$lastIdFile = '/tmp/lastId';
$lastId = 0;
if (file_exists($lastIdFile)) {
    $lastId = (int)file_get_contents($lastIdFile);
}

// Open a connection to the database
$pdo = new PDO('mysql:host=localhost;dbname=test', $user, $pass);

// Select all rows with an ID larger than $lastId
$stmt = $pdo->prepare("SELECT id, name, price FROM products WHERE id > :id ORDER BY id");
$stmt->bindValue(':id', $lastId, PDO::PARAM_INT);
$stmt->execute();

// Fetch all data
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);
// ... some work with the response ...

// Close the connection (PDO has no close() method)
$pdo = null;

// Save the last processed ID, if anything was returned
if ($results) {
    $lastItem = end($results);
    file_put_contents($lastIdFile, $lastItem['id']);
}

In this example, on the first run we request all rows whose ID is greater than 0, then we read the value of the last ID (let's say it is 100) and save it, for example, on the file system.

An hour later, say 100 more records have been added in the meantime. Our script reads the last ID from the file and selects all records with an ID greater than 100. And so on.

This approach to such problems is known as the Sliding Window principle.

Always keep passwords hashed

If an attacker somehow gains access to your database, they must not be able to read the passwords used by your users. PHP's built-in password hashing and verification functions help with this.

Instead of:

$nickname = 'user1';
$password = 'secret_password';
$query = "INSERT INTO users (nickname, password) VALUES (:nickname, :password);";

Use something like this:

$nickname = 'user1';
$password = password_hash('secret_password', PASSWORD_DEFAULT);
$query = "INSERT INTO users (nickname, password) VALUES (:nickname, :password);";

Then for verification:

$password = 'secret_password';
$hash = '$2y$10$9trAUhBsQr.rTZuRz9T...7piFrj4weTgckSWvtwKqAVXnlRqq26y'; // secret_password
$isValidPassword = password_verify($password, $hash); // true
// or
$isValidPassword = password_verify('wrong_password', $hash); // false

Read more about password_hash() and password_verify().

Try to avoid subqueries, use JOIN instead

There are a few ways to get related data from multiple tables in a single query; this is usually done either with subqueries or with a JOIN.

Subqueries are usually more readable (human friendly) but tend to require more server resources, while queries implemented with the JOIN directive are less readable but usually perform better (computer friendly).

For example, you have two tables, accounts and users, and you need to select all users connected to an account whose balance is at least 100 USD.

Using subqueries:

SELECT id, name
FROM users
WHERE account_id IN (
  SELECT id
  FROM accounts
  WHERE balance >= 100
);

Using JOINs:

-- INNER JOIN, because the WHERE filter on "a" would discard unmatched rows anyway
SELECT u.id, u.name
FROM users AS u
INNER JOIN accounts AS a ON (u.account_id = a.id)
WHERE a.balance >= 100;

Use transactions to guard against race conditions

Let's analyze a situation where you have an example table and two scripts (A and B) that work with it. Each script performs small updates of individual rows in the same column: it first queries the database, then analyzes the received data, and then makes a change.

Script A

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='test' WHERE id=1;

Script B

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;

In some cases, it may turn out that both scripts read the same row and try to update it. Usually only the data written last (let's say by script B) survives, because it overwrites the data written earlier (by script A).

Transactions help prevent such situations. They are performed in three stages: beginning the transaction, the data-handling logic, and the commit (i.e. the completion).

START TRANSACTION;
SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;

If an error occurs during the execution of this block before the commit (for example, because another script changed the data), we can roll back, and the database will not apply the changes described inside the transaction.

ROLLBACK;

Using this technique will help you avoid a lot of unpleasant situations associated with race conditions.
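
Note that a transaction alone does not make the read-then-update sequence atomic. To actually serialize the two scripts, you would typically also lock the selected row, e.g. with SELECT ... FOR UPDATE (a sketch for InnoDB):

START TRANSACTION;
-- FOR UPDATE locks the row; a concurrent transaction blocks here until COMMIT or ROLLBACK
SELECT * FROM example WHERE id=1 FOR UPDATE;
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;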

Conclusion

Of course, these are not all possible recommendations, only a basic set. They are easy to follow, and if you apply them, the quality and readability of your code will be much higher, and the code itself will be safer.

PostgreSQL & NodeJS on backend best practices

Don’t mix DB queries and View

Mixing database-access code (business logic) with view code on the backend eventually makes the application very difficult to maintain (technical debt).

Modern frameworks, such as ExpressJS, ReactJS, NestJS (etc.), suggest using the Model-View-Controller (MVC) pattern to separate the database logic from the view; the connection between these abstractions is handled by an intermediate controller. A minimal sketch of this separation is shown below.
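
A minimal sketch of this separation with plain functions instead of a real framework (the users table and the findUser/renderUser names are made up for illustration; client is a connected node-postgres client like in the examples below):

// Model: the only place that talks to the database
async function findUser(client, id) {
  const res = await client.query('SELECT id, name FROM users WHERE id = $1 LIMIT 1', [id]);
  return res.rows[0] ?? null;
}

// View: only formats data, never queries the database
function renderUser(user) {
  return user ? `<p>Hello, ${user.name}!</p>` : '<p>User not found</p>';
}

// Controller: glues the model and the view together
async function showUserAction(client, id) {
  return renderUser(await findUser(client, id));
}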

Don’t trust data sent by users

Don't use data entered by users directly in database queries without validation and sanitisation; I recommend reading up on what SQL injections are. In short, if you don't clean up user-entered data, at some point a user can submit a specially crafted query and gain access to your database or corrupt it.

Some JS frameworks have embedded mechanisms for cleaning up input; more details about validation and sanitisation are in this article.
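
Without a framework, even a small hand-rolled check goes a long way (a sketch; the parseId name is made up for illustration):

// Accept only positive integers for an "id" coming from user input
function parseId(raw) {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error('Invalid id');
  }
  return id;
}

parseId('42');                    // 42
parseId('42; DROP TABLE users');  // throws Error: Invalid id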

Don’t Repeat Yourself (DRY)

This is a programming principle that encourages you not to repeat yourself when developing software, but instead to reuse existing code. For example, if your project has several similar queries, you can create a function/method that generates the query from a template, passing the changing value as a parameter.

Instead of:

let user1 = "SELECT id, name FROM users WHERE id = 1 LIMIT 1";
let user2 = "SELECT id, name FROM users WHERE id = 2 LIMIT 1";

Use something like this:

function getUserById(id) {
  if (!Number.isInteger(id)) {
    throw new TypeError('ID must be an integer');
  }
  if (id <= 0) {
    throw new RangeError('ID must be greater than zero');
  }
  return `SELECT id, name FROM users WHERE id = ${id} LIMIT 1`;
}

let user1 = getUserById(1);
let user2 = getUserById(2);

Use prepared statements

Instead of building a query by mixing variables with SQL directives, it is most practical to use prepared statements. This feature complicates the code a little, but lets you worry less about the data submitted by users, because escaping is done automatically.

Using the node-postgres library:

const query = {
  text: 'INSERT INTO users(name, email) VALUES($1, $2)',
  values: ['brianc', 'brian.m.carlson@gmail.com'],
}

client
  .query(query)
  .then(res => console.log(res.rows[0]))
  .catch(e => console.error(e.stack))

Don’t keep unnecessary database connections open

To work with a database from NodeJS, we first connect to the DB, then create a statement describing the query to be executed, and then fetch the returned data. Developers sometimes forget to close the connection when the work is done; NodeJS and its libraries can do this for you, but it is safer to close everything explicitly, just in case.

Using the node-postgres library:

const { Client } = require('pg')
const client = new Client()
await client.connect()
 
const res = await client.query('SELECT $1::text as message', ['Hello world!'])
console.log(res.rows[0].message) // Hello world!
await client.end()

Try to avoid SELECT * queries

Such queries put a large load on the DBMS, especially when they return a large data set, and you often don't need the data from all the columns of the table for subsequent work.

Instead of:

SELECT * FROM users;

Better to use something like this:

SELECT id, name FROM users;

Use LIMIT/OFFSET pair for pagination

Instead of displaying all the data from the table at once, it is more convenient to use pagination. You can implement it on the database side with the help of the LIMIT and OFFSET directives.

LIMIT – limits the number of returned rows to the specified count.

OFFSET – skips the specified number of rows from the beginning of the result.

For example:

SELECT id, price, created_at FROM orders LIMIT 10 OFFSET 10;

This means: return 10 rows from the orders table, skipping the first 10 rows.
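
A typical way to derive the OFFSET from a page number (a sketch; client is a connected node-postgres client and page would come from validated user input):

const perPage = 10;
const page = 3; // 1-based page number
const offset = (page - 1) * perPage;

const res = await client.query(
  'SELECT id, price, created_at FROM orders LIMIT $1 OFFSET $2',
  [perPage, offset],
);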

Sliding window – another alternative for pagination

Sometimes the capabilities of LIMIT/OFFSET are not enough, but you still need to select data in blocks. Suppose we have a cron script (executed every hour) that must fetch all new data from the database, ignoring the data already selected during previous runs.

Here is a small example of such a script:

const { Client } = require('pg');
const fs = require('fs');

async function run() {
  // Get the last processed ID
  let lastId = 0;
  try {
    lastId = parseInt(fs.readFileSync('/tmp/lastId', 'utf8'), 10) || 0;
  } catch (err) {
    // First run: the file does not exist yet
  }

  // Open a connection to the database
  const client = new Client();
  await client.connect();

  // Select all rows with an ID larger than lastId (node-postgres uses $1-style placeholders)
  const res = await client.query(
    'SELECT id, name, price FROM products WHERE id > $1 ORDER BY id',
    [lastId],
  );
  // ... some work with res.rows ...

  // Close the connection
  await client.end();

  // Save the last processed ID, if anything was returned
  if (res.rows.length > 0) {
    lastId = res.rows[res.rows.length - 1].id;
    fs.writeFileSync('/tmp/lastId', String(lastId), 'utf8');
  }
}

run().catch(console.error);

In this example, on the first run we request all rows whose ID is greater than 0, then we read the value of the last ID (let's say it is 100) and save it, for example, on the file system.

An hour later, say 100 more records have been added in the meantime. Our script reads the last ID from the file and selects all records with an ID greater than 100. And so on.

This approach to such problems is known as the Sliding Window principle.

Always keep passwords hashed

If an attacker somehow gains access to your database, they must not be able to read the passwords used by your users. In NodeJS, password-hashing libraries such as bcrypt help with this.

Instead of:

let nickname = 'user1';
let password = 'secret_password';
let query = "INSERT INTO users (nickname, password) VALUES ($1, $2);"

Use something like this:

const bcrypt = require("bcrypt");

let nickname = 'user1';
let password = await bcrypt.hash('secret_password', 10);
let query = "INSERT INTO users (nickname, password) VALUES ($1, $2);"

Then for verification:

const bcrypt = require("bcrypt");

const password = 'secret_password';
const hash = '$2b$10$9trAUhBsQr.rTZuRz9T...7piFrj4weTgckSWvtwKqAVXnlRqq26y'; // hash of "secret_password"

let isValidPassword = await bcrypt.compare(password, hash); // true
// or
isValidPassword = await bcrypt.compare('wrong_password', hash); // false

Read more about the bcrypt library.

Try to avoid subqueries, use JOIN instead

There are a few ways to get related data from multiple tables in a single query; this is usually done either with subqueries or with a JOIN.

Subqueries are usually more readable (human friendly) but tend to require more server resources, while queries implemented with the JOIN directive are less readable but usually perform better (computer friendly).

For example, you have two tables, accounts and users, and you need to select all users connected to an account whose balance is at least 100 USD.

Using subqueries:

SELECT id, name
FROM users
WHERE account_id IN (
  SELECT id
  FROM accounts
  WHERE balance >= 100
);

Using JOINs:

-- INNER JOIN, because the WHERE filter on "a" would discard unmatched rows anyway
SELECT u.id, u.name
FROM users AS u
INNER JOIN accounts AS a ON (u.account_id = a.id)
WHERE a.balance >= 100;

Use transactions to guard against race conditions

Let's analyze a situation where you have an example table and two scripts (A and B) that work with it. Each script performs small updates of individual rows in the same column: it first queries the database, then analyzes the received data, and then makes a change.

Script A

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='test' WHERE id=1;

Script B

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;

In some cases, it may turn out that both scripts read the same row and try to update it. Usually only the data written last (let's say by script B) survives, because it overwrites the data written earlier (by script A).

Transactions help prevent such situations. They are performed in three stages: beginning the transaction, the data-handling logic, and the commit (i.e. the completion).

START TRANSACTION;
SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;

If an error occurs during the execution of this block before the commit (for example, because another script changed the data), we can roll back, and the database will not apply the changes described inside the transaction.

ROLLBACK;

Using this technique will help you avoid a lot of unpleasant situations associated with race conditions.
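
Note that a transaction alone does not make the read-then-update sequence atomic. To actually serialize the two scripts, you would typically also lock the selected row with SELECT ... FOR UPDATE (a sketch):

START TRANSACTION;
-- FOR UPDATE locks the row; a concurrent transaction blocks here until COMMIT or ROLLBACK
SELECT * FROM example WHERE id=1 FOR UPDATE;
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;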

Conclusion

Of course, these are not all possible recommendations, only a basic set. They are easy to follow, and if you apply them, the quality and readability of your code will be much higher, and the code itself will be safer.

[SOLVED] PMM Server installation error – Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

pmm-admin add mysql --query-source=perfschema --username=pmm --password=PASSWORD --service-name=MYSQL_NODE --host=127.0.0.1 --port=3306
Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

This can happen if you try to add a database to monitoring via PMM but haven't registered the agent on any PMM server. So you just need to register your agent, which can be done with the following command.

pmm-admin config --server-insecure-tls --server-url=https://admin:PMM_ADMIN_PASSWORD@XXX.XXX.XXX.XXX:443 XXX.XXX.XXX.XXX generic NODE_NAME

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

Now you can try to add your database to the PMM server again.

pmm-admin add mysql --query-source=perfschema --username=pmm --password=PASSWORD --service-name=MYSQL_NODE --host=127.0.0.1 --port=3306

MySQL Service added.
Service ID  : /service_id/1014a471-e6bb-4fe6-960c-12610a9050fd
Service name: MYSQL_NODE

Table statistics collection enabled (the limit is 1000, the actual table count is 103).

[SOLVED] PMM Server installation error – Connection check failed: pq: |Peer|Ident|Password authentication failed for user "pmm".

Connection check failed: pq: |Peer|Ident|Password authentication failed for user "pmm".

If you see similar errors when trying to add a PostgreSQL database via pmm-admin:

Most likely you have a problem authorizing the PMM agent with your database. First, you need to make sure that the user "pmm" really exists.

postgres=# \du
                                List of roles
 Role name |                   Attributes                   | Member of
-----------+------------------------------------------------+-----------
 pmm       | Superuser                                      | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

If there is no such user, it should be created.

postgres=# CREATE USER pmm WITH SUPERUSER ENCRYPTED PASSWORD 'password';

Or, if you just forgot the password, you can change it with this query:

postgres=# ALTER USER pmm WITH PASSWORD 'new_password';

Now try to connect as this user locally.

psql postgres pmm -c "\conninfo"

Problem still persists?

Most likely, you need to allow your PMM user to log in locally to the PostgreSQL instance. To do this, add the following line to pg_hba.conf:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             pmm                                     md5

And reload the configuration:

su - postgres
psql -c "select pg_reload_conf()"

If you don't know where your pg_hba.conf is placed, you can ask PostgreSQL about it:

postgres=# SHOW hba_file;
            hba_file
---------------------------------
 /var/lib/pgsql/data/pg_hba.conf

Try to test the connection again.

psql postgres pmm -c "\conninfo"
Password for user pmm:
You are connected to database "postgres" as user "pmm" via socket in "/var/run/postgresql" at port "5432".

Now you are ready to add the PostgreSQL service to your PMM server. To add an instance, run this command in your console:

pmm-admin add postgresql \
--username=pmm \
--password=pass \
--server-url=https://admin:admin@X.X.X.X:443 \
--server-insecure-tls

Problem still persists?

Try adding one more line to pg_hba.conf:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
  host    all             pmm             127.0.0.1/32            md5

Reload configuration:

su - postgres
psql -c "select pg_reload_conf()"

And try to add the agent again:

pmm-admin add postgresql \
--username=pmm \
--password=pass \
--server-url=https://admin:admin@X.X.X.X:443 \
--server-insecure-tls

PostgreSQL Service added.
Service ID  : /service_id/8776b3b2-0c00-4155-a8bd-da6b7414f550
Service name: pmm-test-postgresql

You can also make sure that the metrics exporter is really working:

pmm-admin list

Service type        Service name               Address and port        Service ID
PostgreSQL          pmm-test-postgresql        127.0.0.1:5432          /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297

Agent type                           Status           Metrics Mode        Agent ID                                              Service ID                                              Port
pmm_agent                            Connected                            /agent_id/670ce7e3-dab6-4b81-b53e-aa01157f6041                                                                0
node_exporter                        Running          push                /agent_id/8df6ead8-d06c-4080-8ffa-bc5bf20f97c9                                                                42001
postgres_exporter                    Running          push                /agent_id/d0078319-e859-4a01-ad5f-5c0408cdede2        /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297        42006
postgresql_pgstatements_agent        Waiting                              /agent_id/f6799da9-1bb4-48e3-a268-4b45507d47cd        /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297        0
vmagent                              Running          push                /agent_id/26b38532-6532-49c5-97

[SOLVED] PMM Server installation error – Connection check failed: Error 1130: Host is not allowed to connect to this server.

Connection check failed: Error 1130: Host 'XXX.XXX.XXX.XXX' is not allowed to connect to this server.

If you get this type of error, it means that you need to grant the pmm user the right access to the database.

You can check the current permissions for the pmm user with SHOW GRANTS queries.

SHOW GRANTS FOR 'pmm';

SHOW GRANTS FOR 'pmm'@'XXX.XXX.XXX.XXX';

The query should return a response like this:

+-------------------------------------------------------------------------------------------------------------------+
| Grants for pmm@XXX.XXX.XXX.XXX                                                                                     |
+-------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, RELOAD, PROCESS, REPLICATION CLIENT ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY PASSWORD 'XXX'   |
+-------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

If there is a problem, the query will probably return an error like this:

ERROR 1141 (42000): There is no such grant defined for user 'pmm' on host 'XXX.XXX.XXX.XXX'

So the fix in this situation is to create a user with the right permissions:

On MySQL 8.0

CREATE USER 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY 'PASSWORD' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX';

On MySQL 5.7

CREATE USER 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY 'PASSWORD' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX';

And after creating the user, you need to flush the privileges:

FLUSH PRIVILEGES;