3 ways to set up a PMM Server

Linode VPS

Read full article.

This method involves self-configuring everything you need on a virtual machine in the Linode cloud, with all the complexities and peculiarities that come with it.

Pros:
– You can manage the entire technology stack;
– Customization of the VPS;
– Automated backups and restores are handled on the Linode side.

Cons:
– You have to manage the entire technology stack;
– Updates of PMM and OS packages must be done manually;
– Security issues are on your side;
– PMM server status monitoring is on your side, too.

AWS Marketplace

Read full article.

Installing through the Marketplace is much easier than installing manually on a virtual machine: there is no need to maintain the solution or think about backups, since AWS takes care of most of these tasks. The only tricky part is setting up a VPC for the first time.

Pros:
– Easy to install in a few clicks;
– Full support, including updates and backups, is on the AWS side;
– Customization of the VPS;
– Flexible configuration of security rules using AWS VPC;
– PMM server status monitoring.

Cons:
– High price;
– You need to prepare a VPC and Security Groups to provide access to the PMM server from the internet;
– The AWS VPC rules need to be maintained on your side.

HostedPMM

Read full article.

A fully automatic solution for deploying a PMM server: the HPMM team takes care of all the tasks of maintaining the virtual machines and software. You only need to register and connect PMM agents.

Pros:
– Low price;
– Easy to install in a few clicks and easy to use;
– Full support, including updates and backups, is on the HPMM side;
– PMM server status monitoring;
– Security issues are handled on the HPMM side.

Cons:
– There are only two options for available virtual machines, although this may be a plus for some.

Top 4 PMM dashboards to start with PMM

When I first logged in to the PMM web interface (which is, by the way, standard Grafana with a predefined set of dashboards), I was a bit puzzled and did not know where to look first.

So I decided to write this article to help newcomers start working with PMM. Some of it will probably be interesting for experts, too.

Let’s start with something really interesting. My personal favorite is “PMM Query Analytics”, the first one in the “PMM dashboards” list:

This dashboard shows the total load (number of executions multiplied by the server time spent on each query) for each digested query:

What does it mean?

Example: you have a lot of queries like “SELECT customerName from customers WHERE customerId = <actual ID>”, each with a unique customerId.

PMM Query Analytics will treat all these queries as a single item and calculate for you the impact this query puts on your server.

When a simple query appears in your top 10 queries by load, it usually means you forgot to create an index, you don’t have enough buffers, or one of a billion other reasons – but at least you have a reason to take some action.

You can click on a digested query to see more details about it; you can also see query examples with actual arguments (not just the digest) and even check the query execution plan, similar to what MySQL EXPLAIN shows.

Note that you can sort queries in this dashboard by Load, Query time (averaged for the digested queries) or Query count.

The next dashboard worth taking a look at is “Nodes overview” under the “System (Node)” section:

It’s not rocket science, but it shows you basic OS metrics and how they change over time:

This can be extremely useful for investigating incidents, e.g. checking whether you have enough RAM on the server to withstand peak loads.

Note that this is basically a standard Grafana interface so you can pick the time zone and period to view in the top left corner:

You can also pick the autorefresh interval or disable it (can be handy when viewing longer intervals to save traffic and browser RAM).

The next basic but very useful dashboard is “MySQL instances overview” under the “MySQL” section (it could just as well be “PostgreSQL instances overview” if you use Postgres, or “MongoDB instances overview” – you get the idea):

Same as the previous one, it shows you basic MySQL performance graphs and how they change over time.

Note that many graphs are collapsed by default; you can expand them by clicking on the appropriate collapsed section title (e.g. “InnoDB I/O details” in the screenshot above).

Let’s get to the last dashboard in this article: PMM Advisor checks.

It can be accessed by clicking on ‘Advisor checks’ in the left menu:

You will see something like this:

Click on the ‘Critical’ or ‘Warning’ counts to view hints about potential configuration issues. This can be handy even for experienced users.

This small walkthrough was only about dashboards, but PMM can do more. For example, we did not say a single word about alerting configuration, retention, access control, custom dashboards and so on.

[SOLVED] PMM Server installation error – x509: certificate signed by unknown authority.

x509: certificate signed by unknown authority.

This error can occur when you try to connect pmm-agent to a pmm-server over a secure channel, but your server only has a self-signed certificate.

pmm-admin config --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Failed to register pmm-agent on PMM Server: Post "https://host:443/v1/management/Node/Register": x509: certificate signed by unknown authority.

So first of all, you need to decide whether the connection really needs to be secured; if it doesn’t, you can just add the --server-insecure-tls flag. For example, if the PMM server and PMM clients are on the same isolated network, it is not necessary to build a fully secure connection.

pmm-admin config --server-insecure-tls --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client

In other cases, a secure connection is preferred.

You need to create a DNS entry for the pmm server and make sure the client resolves it correctly.
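For example (assuming the DNS name you created is pmm.example.com, which is just a placeholder here), you can quickly check from the client host that the name resolves:

getent hosts pmm.example.com

# or query DNS directly
dig +short pmm.example.com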

Then you need to put the certificate, ca-chain and key into the docker container with the pmm server.

docker cp -L /tmp/cert.pem pmm-server:/srv/nginx/certificate.crt
docker cp -L /tmp/cert.ca pmm-server:/srv/nginx/ca-certs.pem
docker cp -L /tmp/cert.key pmm-server:/srv/nginx/certificate.key

After updating the certificate, you need to restart the container

docker restart pmm-server

And now the agent should connect to the server correctly.

pmm-admin config --server-url=https://admin:PASSWORD@host:443 XXX.XXX.XXX.XXX generic my_pmm_client
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

PS: You can use a Let’s Encrypt certificate for this.

If you still receive this kind of error

We found a bug with the official pmm-client image: even if the certificates on the server side are completely correct, you may still receive this error.

pmm-client  | INFO[2022-11-30T08:53:38.245+00:00] Loading configuration file /etc/pmm-agent.yaml.  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.246+00:00] Using /usr/local/percona/pmm2/exporters/node_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/rds_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.247+00:00] Using /usr/local/percona/pmm2/exporters/azure_exporter  component=setup
pmm-client  | INFO[2022-11-30T08:53:38.248+00:00] Using /usr/local/percona/pmm2/exporters/vmagent  component=setup
pmm-client  | Checking local pmm-agent status...
pmm-client  | pmm-agent is not running.
pmm-client  | Registering pmm-agent on PMM Server...
pmm-client  | Failed to register pmm-agent on PMM Server: Post "https://host:443/v1/management/Node/Register": x509: certificate signed by unknown authority.
pmm-client exited with code 1

It is probably related to a problem with the certificates in the pmm-client image. We reported this bug to Percona, so the problem should be resolved when they update the official image.

So at the moment, the only workaround for the Docker-oriented install instructions is to enable the PMM_AGENT_SERVER_INSECURE_TLS option:

PMM_SERVER=X.X.X.X:443
docker run \
--rm \
--name pmm-client \
-e PMM_AGENT_SERVER_ADDRESS=${PMM_SERVER} \
-e PMM_AGENT_SERVER_USERNAME=admin \
-e PMM_AGENT_SERVER_PASSWORD=admin \
-e PMM_AGENT_SERVER_INSECURE_TLS=1 \
-e PMM_AGENT_SETUP=1 \
-e PMM_AGENT_CONFIG_FILE=config/pmm-agent.yaml \
--volumes-from pmm-client-data \
percona/pmm-client:2

MySQL & PHP on backend best practices

When building a PHP backend that uses MySQL (and not only MySQL), it is important to follow a few simple tips so that the application works correctly and efficiently.

A few small tips

Don’t mix DB queries and View

Mixing the code that interacts with the database (business logic) with the views on the backend side eventually makes the application very difficult to support and maintain (technical debt).

Modern frameworks such as Laravel, Yii, CakePHP or the Symfony components suggest using the Model-View-Controller (MVC) pattern to separate the database logic from the view, with the controller acting as the intermediate layer connecting these abstractions.

Don’t trust data sent by users

Don’t use data entered by users directly in database queries without validation and sanitisation; I recommend reading about SQL injections. In short, if you don’t clean up user-entered data, at some point a user can submit a specially crafted query and gain access to your database or corrupt it.

Built-in PHP functions like filter_var() and filter_input() can help you sanitize and validate input data.
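For instance, here is a small sketch of validating a numeric ID and an e-mail address before they get anywhere near a query (the parameter names are just examples):

// Validate an integer ID coming from the query string
// (returns false if invalid, null if the parameter is missing)
$id = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);
if ($id === false || $id === null) {
    http_response_code(400);
    exit('Invalid ID');
}

// Validate an e-mail address coming from a form
$email = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);
if ($email === false) {
    http_response_code(400);
    exit('Invalid e-mail');
}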

Don’t Repeat Yourself (DRY)

This is a programming principle that encourages you not to repeat yourself when developing software, but to reuse existing code instead. For example, if your project has several similar queries, you can create a function/method that generates the database query from a template and accepts the changing value as a parameter.

Instead of:

$user1 = "SELECT id, name FROM users WHERE id = 1 LIMIT 1";
$user2 = "SELECT id, name FROM users WHERE id = 2 LIMIT 1";

Use something like this:

function getUserById(int $id) {
    if ($id <= 0) {
        throw new \OutOfRangeException("ID must be greater than zero");
    }
    return "SELECT id, name FROM users WHERE id = $id LIMIT 1";
}

$user1 = getUserById(1);
$user2 = getUserById(2);

Use prepared statement

Instead of forming a query by mixing variables and SQL directives, it is more practical to use prepared statements. This will complicate the code a little, but it lets you worry less about the data submitted by users, because parameter binding and escaping are handled automatically.

Using ext-mysqli:

// Prepare the statement (`order` is a reserved word in MySQL, so it is quoted with backticks)
$stmt = $mysqli->prepare('INSERT INTO products(label, `order`, price) VALUES (?, ?, ?)');

// Bind values to the statement (bind_param expects variables, not literals)
$label = 'new product';
$order = 9;
$price = 123.45;
$stmt->bind_param('sid', $label, $order, $price);
// "sid" means:
//  - "s" is type string (first value)
//  - "i" is type integer (second)
//  - "d" is type double/float (third)

// Execute the query
$stmt->execute();

Using PDO:

// Prepare the statement
$stmt = $pdo->prepare("INSERT INTO products(label, `order`, price) VALUES (:label, :order, :price)");

// Bind values to statement (PDO will detect type automatically)
$stmt->bindValue(':label', 'new product');
$stmt->bindValue(':order', 9);
$stmt->bindValue(':price', 123.45);

// Execute query
$stmt->execute();

Don’t keep unnecessary database connections open

In order to work with a database from PHP, we first need to connect to the DB, then create a statement describing which query needs to be executed, and then fetch the returned data. Developers sometimes forget to close the connection after the work is completed; PHP and its libraries can of course clean this up for you, but it’s safer to close everything explicitly just in case.

For example:

$stmt = $mysqli->prepare(...);
// Other code here
$stmt->execute();
// Processing the response from the DB

// Closing the statement and the connection
$stmt->close();
$mysqli->close();

It is almost the same with the PDO extension, except that PDO has no close() method: you release the connection by dropping all references to the statement and connection objects.
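A minimal sketch of that, assuming $pdo is an existing connection:

$stmt = $pdo->prepare('SELECT id, name FROM users WHERE id = :id LIMIT 1');
$stmt->execute([':id' => 1]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);

// PDO has no close() method: dropping the references releases the connection
$stmt = null;
$pdo = null;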

Try to avoid SELECT * queries

Such queries put a large load on the DBMS, especially if they return a large data set, and you often don’t need all the columns of the table for the subsequent work anyway.

Instead of:

SELECT * FROM users;

Better to use something like this:

SELECT id, name FROM users;

Use LIMIT/OFFSET pair for pagination

Instead of displaying all the data from the table at once, it is more convenient to use pagination. You can implement it on the database side with the help of the LIMIT and OFFSET directives.

LIMIT – limits the number of returned results to the specified number of rows.

OFFSET – skips the specified number of rows from the beginning of the result set.

For example:

SELECT id, price, created_at FROM orders LIMIT 10 OFFSET 10;

This means: return 10 rows from the orders table, skipping the first 10 rows.
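In PHP you would normally combine this with a prepared statement. A small PDO sketch (the page number and page size here are made-up example values), assuming $pdo is an existing connection:

$perPage = 10;
$page    = 2; // e.g. taken from the request after validation
$offset  = ($page - 1) * $perPage;

$stmt = $pdo->prepare('SELECT id, price, created_at FROM orders ORDER BY id LIMIT :limit OFFSET :offset');
// LIMIT/OFFSET values must be bound as integers
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
$stmt->execute();
$orders = $stmt->fetchAll(PDO::FETCH_ASSOC);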

What to do if pagination is not applicable

But let’s say LIMIT/OFFSET doesn’t suit your case and you need to get all the rows from the table (or a very large amount of them).

In such cases, it is better to use an iterator: it lets you process the data received from the database gradually, so that at any point in time the backend holds and processes only a small piece of it. This is much more efficient in terms of consumed resources than fetching the whole result set at once.

For example, using ext-mysqli:

// Use an unbuffered query (MYSQLI_USE_RESULT) so rows are fetched from the server gradually
$result = $mysqli->query($query, MYSQLI_USE_RESULT);
while ($row = $result->fetch_array(MYSQLI_ASSOC)) {
    // your code here
}

Using PDO:

// For MySQL you can also disable buffering so rows stream from the server:
// $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);
$stmt = $pdo->query($query);
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    // your code here
}

Sliding window – another alternative for pagination

Sometimes the possibilities of LIMIT/OFFSET are not enough, but you still need to select data in blocks. Suppose we have a cron script (executed every hour) that must fetch all new data from the database while ignoring the rows that were already selected during previous runs.

Here is a small example of such a script:

// Get the last processed ID
$lastIdFile = '/tmp/lastId';
$lastId = 0;
if (file_exists($lastIdFile)) {
    $lastId = (int) file_get_contents($lastIdFile);
}

// Open connection to the database
$pdo = new PDO('mysql:host=localhost;dbname=test', $user, $pass);

// Select all rows with ID larger than $lastId
$stmt = $pdo->prepare("SELECT id, name, price FROM products WHERE id > :id ORDER BY id");
$stmt->bindValue(':id', $lastId, PDO::PARAM_INT);
$stmt->execute();

// Fetch all data
$results = $stmt->fetchAll(PDO::FETCH_ASSOC);
// ... some work with the response ...

// Close the connection (PDO has no close(), dropping the reference is enough)
$pdo = null;

// Detect and save the last ID
if (!empty($results)) {
    $lastItem = end($results);
    $lastId = $lastItem['id'];

    file_put_contents($lastIdFile, $lastId);
}

In this example, at the first start, we request all rows whose ID is greater than 0, after which we read the value of the last ID (let’s say it is 100) and save it, for example, on the file system.

An hour later, suppose 100 more records have been added. Our script reads the last ID from the file and selects all records with an ID greater than 100. And so on.

To solve such problems, the Sliding Window principle is used.

Always keep passwords hashed

If an attacker somehow gains access to your database, they must not be able to read the users’ passwords. PHP’s built-in password hashing and verification functions can help you with this.

Instead of:

$nickname = 'user1';
$password = 'secret_password';
$query = "INSERT INTO users (nickname, password) VALUES (:nickname, :password);"

Use something like this:

$nickname = 'user1';
$password = password_hash('secret_password', PASSWORD_DEFAULT);
$query = "INSERT INTO users (nickname, password) VALUES (:nickname, :password);"

Then for verification:

$password = 'secret_password';
$hash = '$2y$10$9trAUhBsQr.rTZuRz9T...7piFrj4weTgckSWvtwKqAVXnlRqq26y'; // secret_password
$isValidPassword = password_verify($password, $hash); // true
// or
$isValidPassword = password_verify('wrong_password', $hash); // false

Read more about password_hash() and password_verify().

Try to avoid subqueries, use JOIN instead

There are a few ways to get related data from multiple tables in a single query; this is usually done either with subqueries or with a JOIN.

Subqueries are often more readable (human friendly) but usually require more server resources, while queries written with the JOIN directive are less readable but tend to perform better (computer friendly).

For example, you have two tables, accounts and users, and you need to select all users connected to an account whose balance is larger than 100 USD.

Using subqueries:

SELECT id, name
FROM users
WHERE account_id IN (
  SELECT id
  FROM accounts
  WHERE balance >= 100
);

Using JOINs:

SELECT u.id, u.name
FROM users AS u
LEFT JOIN accounts AS a ON (u.account_id = a.id)
WHERE a.balance >= 100;

Use transactions to protect against Race Conditions

Let’s analyze a situation where you have an example table and two scripts (A and B) that work with it. Each script performs small updates of individual rows in the same column: it first queries the database, then does some analysis of the received data, and then makes a change.

Script A

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='test' WHERE id=1;

Script B

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;

In some cases, it may turn out that both scripts read the same row and try to update it. Usually only the data that was written last (let’s say by script B) is kept, because the data written earlier (by script A) is overwritten.

Transactions (combined with row locking) help prevent such situations. They are performed in three stages: the beginning of the transaction, the data-manipulation logic, and the commit (that is, the completion).

START TRANSACTION;
SELECT * FROM example WHERE id=1 FOR UPDATE; -- id:1, name:demo (the row is now locked)
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;

If an error occurs during the execution of this block before the commit (for example, because another script changed the data), we can roll back, and the database will not save the changes made inside the transaction.

ROLLBACK;

Using this technique will help you avoid a lot of unpleasant situations associated with race conditions.
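In PHP code, the same pattern with PDO might look like the sketch below (table and column names follow the example above; the locking read with FOR UPDATE is what actually stops the second script from reading the row until the first one commits):

try {
    $pdo->beginTransaction();

    // Lock the row we are going to modify
    $stmt = $pdo->prepare('SELECT id, name FROM example WHERE id = :id FOR UPDATE');
    $stmt->execute([':id' => 1]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // ... analyse $row, then update it ...
    $upd = $pdo->prepare('UPDATE example SET name = :name WHERE id = :id');
    $upd->execute([':name' => 'zzz', ':id' => 1]);

    $pdo->commit();
} catch (\Throwable $e) {
    // Roll back so the partial changes are not saved
    $pdo->rollBack();
    throw $e;
}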

Conclusion

Of course, these are not all possible recommendations, only a basic set. They are easy to follow, and if you apply them, the quality and readability of your code will be much higher, and the code itself will be safer.

PostgreSQL & NodeJS on backend best practices

Don’t mix DB queries and View

Mixing the code that interacts with the database (business logic) with the views on the backend side eventually makes the application very difficult to support and maintain (technical debt).

Modern Node.js frameworks, such as Express.js or NestJS, suggest using the Model-View-Controller (MVC) pattern to separate the database logic from the view, with the controller acting as the intermediate layer connecting these abstractions.

Don’t trust data sent by users

Don’t use data entered by users directly in database queries without validation and sanitisation; I recommend reading about SQL injections. In short, if you don’t clean up user-entered data, at some point a user can submit a specially crafted query and gain access to your database or corrupt it.

Some JS frameworks have built-in mechanisms for input cleanup; more details about validation and sanitisation can be found in this article.
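As a minimal framework-agnostic sketch, a numeric ID coming from a request can be checked before it reaches a query (req here stands for an Express-style request object and is just an example):

// Validate that the incoming value is a positive integer before using it in a query
function parseId(raw) {
  const id = Number(raw);
  if (!Number.isInteger(id) || id <= 0) {
    throw new Error('Invalid ID');
  }
  return id;
}

const userId = parseId(req.query.id); // throws if the value is not a positive integer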

Don’t Repeat Yourself (DRY)

This is a programming principle that encourages you not to repeat yourself when developing software, but to reuse existing code instead. For example, if your project has several similar queries, you can create a function/method that generates the database query from a template and accepts the changing value as a parameter.

Instead of:

let user1 = "SELECT id, name FROM users WHERE id = 1 LIMIT 1";
let user2 = "SELECT id, name FROM users WHERE id = 2 LIMIT 1";

Use something like this:

function getUserById(id) {
  if (!Number.isInteger(id)) {
    throw new TypeError("ID must be an integer");
  }
  if (id <= 0) {
    throw new RangeError("ID must be greater than zero");
  }
  return "SELECT id, name FROM users WHERE id = " + id + " LIMIT 1";
}


let user1 = getUserById(1);
let user2 = getUserById(2);

Use prepared statement

Instead of forming a query by mixing variables and SQL directives, it is more practical to use prepared statements. This will complicate the code a little, but it lets you worry less about the data submitted by users, because parameter binding and escaping are handled automatically.

Using the node-postgres package:

const query = {
  text: 'INSERT INTO users(name, email) VALUES($1, $2)',
  values: ['brianc', 'brian.m.carlson@gmail.com'],
}

client
  .query(query)
  .then(res => console.log(res.rows[0]))
  .catch(e => console.error(e.stack))

Don’t keep unnecessary database connections open

In order to work with a database from NodeJS, we first need to connect to the DB, then create a statement describing which query needs to be executed, and then fetch the returned data. Developers sometimes forget to close the connection after the work is completed; NodeJS and its libraries can of course clean this up for you, but it’s safer to close everything explicitly just in case.

Using the node-postgres package:

const { Client } = require('pg')
const client = new Client()
await client.connect()
 
const res = await client.query('SELECT $1::text as message', ['Hello world!'])
console.log(res.rows[0].message) // Hello world!
await client.end()

Try to avoid SELECT * queries

Such queries put a large load on the DBMS, especially if they return a large data set, and you often don’t need all the columns of the table for the subsequent work anyway.

Instead of:

SELECT * FROM users;

Better to use something like this:

SELECT id, name FROM users;

Use LIMIT/OFFSET pair for pagination

Instead of displaying all the data from the table at once, it is more convenient to use pagination. You can implement it on the database side with the help of the LIMIT and OFFSET directives.

LIMIT – limits the number of returned results to the specified number of rows.

OFFSET – skips the specified number of rows from the beginning of the result set.

For example:

SELECT id, price, created_at FROM orders LIMIT 10 OFFSET 10;

This means: return 10 rows from the orders table, skipping the first 10 rows.
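With node-postgres the same query is usually parameterised. A small sketch, assuming client is an already connected pg client and the page values are made up for the example:

const perPage = 10;
const page = 2; // e.g. taken from the request after validation
const offset = (page - 1) * perPage;

// $1 and $2 are positional placeholders in node-postgres
const res = await client.query(
  'SELECT id, price, created_at FROM orders ORDER BY id LIMIT $1 OFFSET $2',
  [perPage, offset]
);
console.log(res.rows);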

Sliding window – another alternative for pagination

Sometimes the possibilities of LIMIT/OFFSET are not enough, but you still need to select data in blocks. Suppose we have a cron script (executed every hour) that must fetch all new data from the database while ignoring the rows that were already selected during previous runs.

Here is a small example of such a script:

const { Client } = require('pg');
const fs = require('fs');

(async () => {
  // Get the last processed ID
  let lastId = 0;
  try {
    lastId = parseInt(fs.readFileSync('/tmp/lastId', 'utf8'), 10) || 0;
  } catch (err) {
    console.error(err);
  }

  // Open connection to the database
  const client = new Client();
  await client.connect();

  // Select all rows with ID larger than lastId ($1 is a node-postgres placeholder)
  const result = await client.query('SELECT id, name, price FROM products WHERE id > $1 ORDER BY id', [lastId]);
  // ... some work with result.rows ...

  // Close connection
  await client.end();

  // Detect and save the last ID
  if (result.rows.length > 0) {
    lastId = result.rows[result.rows.length - 1].id;
    fs.writeFileSync('/tmp/lastId', String(lastId), 'utf8');
  }
})();

In this example, at the first start, we request all rows whose ID is greater than 0, after which we read the value of the last ID (let’s say it is 100) and save it, for example, on the file system.

An hour later, suppose 100 more records have been added. Our script reads the last ID from the file and selects all records with an ID greater than 100. And so on.

To solve such problems, the Sliding Window principle is used.

Always keep passwords hashed

If an attacker somehow gains access to your database, they must not be able to read the users’ passwords. Password hashing libraries for Node.js, such as bcrypt (used below), can help you with this.

Instead of:

let nickname = 'user1';
let password = 'secret_password';
let query = "INSERT INTO users (nickname, password) VALUES (?, ?);"

Use something like this:

const bcrypt = require("bcrypt");

// (run this inside an async function, since await is used)
let nickname = 'user1';
let password = await bcrypt.hash('secret_password', 10);
let query = "INSERT INTO users (nickname, password) VALUES (?, ?);"

Then for verification:

const bcrypt = require("bcrypt");

// (run this inside an async function, since await is used)
let password = 'secret_password';
let hash = '$2y$10$9trAUhBsQr.rTZuRz9T...7piFrj4weTgckSWvtwKqAVXnlRqq26y'; // secret_password

let isValidPassword = await bcrypt.compare(password, hash); // true
// or
isValidPassword = await bcrypt.compare('wrong_password', hash); // false

Read more about the bcrypt library.

Try to avoid subqueries, use JOIN instead

There are a few ways to get related data from multiple tables in a single query; this is usually done either with subqueries or with a JOIN.

Subqueries are often more readable (human friendly) but usually require more server resources, while queries written with the JOIN directive are less readable but tend to perform better (computer friendly).

For example, you have two tables, accounts and users, and you need to select all users connected to an account whose balance is larger than 100 USD.

Using subqueries:

SELECT id, name
FROM users
WHERE account_id IN (
  SELECT id
  FROM accounts
  WHERE balance >= 100
);

Using JOINs:

SELECT u.id, u.name
FROM users AS u
LEFT JOIN accounts AS a ON (u.account_id = a.id)
WHERE a.balance >= 100;

Use transactions to protect against Race Conditions

Let’s analyze a situation where you have an example table and two scripts (A and B) that work with it. Each script performs small updates of individual rows in the same column: it first queries the database, then does some analysis of the received data, and then makes a change.

Script A

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='test' WHERE id=1;

Script B

SELECT * FROM example WHERE id=1; -- id:1, name:demo
UPDATE example SET name='zzz' WHERE id=1;

In some cases, it may turn out that both scripts read the same row and try to update it. Usually only the data that was written last (let’s say by script B) is kept, because the data written earlier (by script A) is overwritten.

Transactions (combined with row locking) help prevent such situations. They are performed in three stages: the beginning of the transaction, the data-manipulation logic, and the commit (that is, the completion).

START TRANSACTION;
SELECT * FROM example WHERE id=1 FOR UPDATE; -- id:1, name:demo (the row is now locked)
UPDATE example SET name='zzz' WHERE id=1;
COMMIT;

If an error occurs during the execution of this block before the commit (for example, because another script changed the data), we can roll back, and the database will not save the changes made inside the transaction.

ROLLBACK;

Using this technique will help you avoid a lot of unpleasant situations associated with race conditions.
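With node-postgres the same transaction is driven with plain BEGIN/COMMIT/ROLLBACK queries. A minimal sketch, assuming client is an already connected pg client and the example table from above:

try {
  await client.query('BEGIN');

  // Lock the row we are going to modify
  const res = await client.query('SELECT id, name FROM example WHERE id = $1 FOR UPDATE', [1]);

  // ... analyse res.rows[0], then update it ...
  await client.query('UPDATE example SET name = $1 WHERE id = $2', ['zzz', 1]);

  await client.query('COMMIT');
} catch (err) {
  // Roll back so the partial changes are not saved
  await client.query('ROLLBACK');
  throw err;
}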

Conclusion

Of course, these are not all possible recommendations, only a basic set. They are easy to follow, and if you apply them, the quality and readability of your code will be much higher, and the code itself will be safer.

[SOLVED] PMM Server installation error – Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

pmm-admin add mysql --query-source=perfschema --username=pmm --password=PASSWORD --service-name=MYSQL_NODE --host=127.0.0.1 --port=3306
Failed to get PMM Server parameters from local pmm-agent: pmm-agent is not connected to PMM Server.

This can happen if you try to add a database to monitoring via PMM but haven’t registered the agent on any PMM server. So you just need to register your agent. It can be done with the following command:

pmm-admin config --server-insecure-tls --server-url=https://admin:PMM_ADMIN_PASSWORD@XXX.XXX.XXX.XXX:443 XXX.XXX.XXX.XXX generic NODE_NAME

Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.

Now you can try to add your database to pmm-server again.

pmm-admin add mysql --query-source=perfschema --username=pmm --password=PASSWORD --service-name=MYSQL_NODE --host=127.0.0.1 --port=3306

MySQL Service added.
Service ID  : /service_id/1014a471-e6bb-4fe6-960c-12610a9050fd
Service name: MYSQL_NODE

Table statistics collection enabled (the limit is 1000, the actual table count is 103).

[SOLVED] PMM Server installation error – Connection check failed: pq: |Peer|Ident|Password authentication failed for user “pmm”.

Connection check failed: pq: |Peer|Ident|Password authentication failed for user “pmm”.

If you see errors like this when trying to add a PostgreSQL database via pmm-admin:

Most likely you have a problem authorizing the pmm agent with your database. First you need to make sure that the user “pmm” really exists.

postgres=# \du
                               List of roles
 Role name |                   Attributes                   | Member of
-----------+------------------------------------------------+-----------
 pmm       | Superuser                                      | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

If there is no such user, it should be created.

postgres=# CREATE USER pmm WITH SUPERUSER ENCRYPTED PASSWORD 'password';

Or if you just forgot the password – then you can change it with this query:

postgres=# ALTER USER pmm WITH PASSWORD 'new_password';

Now try to connect as this user locally.

psql postgres pmm -c "\conninfo"

Does the problem still persist?

Most likely, you need to add the ability for your PMM user to log in locally to the PostgreSQL instance. To do this, add the following line to pg_hba.conf:

# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             pmm                                     md5

And reload configuration

su - postgres
psql -c "select pg_reload_conf()"

If you don’t know where your pg_hba.conf is located, you can ask PostgreSQL about it:

postgres=# SHOW hba_file;
                hba_file
    ---------------------------------
     /var/lib/pgsql/data/pg_hba.conf

Try to test connection again.

psql postgres pmm -c "\conninfo"
Password for user pmm:
You are connected to database "postgres" as user "pmm" via socket in "/var/run/postgresql" at port "5432".

Now you are ready to add the PostgreSQL service to your PMM server. To add an instance, run this command in your console:

pmm-admin add postgresql \
--username=pmm \
--password=pass \
--server-url=https://admin:admin@X.X.X.X:443 \
--server-insecure-tls

Does the problem still persist?

Try to add one more line to pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD
  host    all             pmm             127.0.0.1/32            md5

Reload configuration:

su - postgres
psql -c "select pg_reload_conf()"

And try to add agent:

pmm-admin add postgresql \
--username=pmm \
--password=pass \
--server-url=https://admin:admin@X.X.X.X:443 \
--server-insecure-tls

PostgreSQL Service added.
Service ID  : /service_id/8776b3b2-0c00-4155-a8bd-da6b7414f550
Service name: pmm-test-postgresql

You can also make sure that the metrics exporter really works:

pmm-admin list

Service type        Service name               Address and port        Service ID
PostgreSQL          pmm-test-postgresql        127.0.0.1:5432          /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297

Agent type                           Status           Metrics Mode        Agent ID                                              Service ID                                              Port
pmm_agent                            Connected                            /agent_id/670ce7e3-dab6-4b81-b53e-aa01157f6041                                                                0
node_exporter                        Running          push                /agent_id/8df6ead8-d06c-4080-8ffa-bc5bf20f97c9                                                                42001
postgres_exporter                    Running          push                /agent_id/d0078319-e859-4a01-ad5f-5c0408cdede2        /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297        42006
postgresql_pgstatements_agent        Waiting                              /agent_id/f6799da9-1bb4-48e3-a268-4b45507d47cd        /service_id/c7827144-22b0-48f4-937e-f79f0ecf4297        0
vmagent                              Running          push                /agent_id/26b38532-6532-49c5-97

[SOLVED] PMM Server installation error – Connection check failed: Error 1130: Host is not allowed to connect to this server.

Connection check failed: Error 1130: Host ‘XXX.XXX.XXX.XXX‘ is not allowed to connect to this server.

If you get this type of error, it means that you need to grant the pmm user the right access to the database.

You can check the current permissions for the pmm user with SHOW GRANTS queries.

SHOW GRANTS FOR 'pmm';

SHOW GRANTS FOR 'pmm'@'XXX.XXX.XXX.XXX';

The request should return a response like this:

+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for pmm@XXX.XXX.XXX.XXX                                                                                                                      |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, RELOAD, PROCESS, REPLICATION CLIENT ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY PASSWORD 'XXX' |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

In case of a problem, the query will probably return an error similar to this:

ERROR 1141 (42000): There is no such grant defined for user 'pmm' on host 'XXX.XXX.XXX.XXX'

So the fix in this situation would be to create a user with the right permissions:

On MySQL 8.0

CREATE USER 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY 'PASSWORD' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX';

On MySQL 5.7

CREATE USER 'pmm'@'XXX.XXX.XXX.XXX' IDENTIFIED BY 'PASSWORD' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'XXX.XXX.XXX.XXX';

And after creating the user, you need to flush the privileges:

FLUSH PRIVILEGES;

How to Set up MySQL Monitoring with PMM

How to Set Up MySQL with PMM

1. Prepare your environment.

Before you start setting up MySQL to be connected to PMM monitoring, you need to have a PMM Client and register it on the PMM Server. Let’s do it.

You may set up the PMM Client in different ways. It can be either running in a Docker container or installed by a package manager or from sources. In this article, we will use the easiest way — we will install it from a repository using a package manager. Make sure you have curl, gnupg, wget, and sudo installed on your server. First of all, you need to add the Percona repo to your repositories list.

For Debian-based OS

a. Configure repositories:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb

dpkg -i percona-release_latest.generic_all.deb

b. Install the pmm2-client package:

apt update
apt install -y pmm2-client

c. Check if it was installed correctly:

pmm-admin --version

For RedHat-based OS

a. Configure repositories:

yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

b. Install the pmm2-client package:

yum install -y pmm2-client

c. Check if it was installed correctly:

pmm-admin --version

After you have installed the PMM Client, you should register it on your server. You need the HPMM server hostname or IP address and the credentials for your connection. You may find this information in your HostedPMM private cabinet.

To register the PMM client on the server, just type:

pmm-admin config --server-insecure-tls --server-url=https://admin:<password>@XXX.XXX.XXX.XXX:443 YYY.YYY.YYY.YYY generic mynode

where XXX.XXX.XXX.XXX is the IP address or hostname of the PMM server, YYY.YYY.YYY.YYY is the IP address or hostname of the PMM client, generic is the node type, and mynode is the name of the client.
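For example, with made-up values (10.0.0.5 as the PMM server address, 10.0.0.21 as this client's address, and web-node-01 as the node name), the command would look like this:

pmm-admin config --server-insecure-tls --server-url=https://admin:<password>@10.0.0.5:443 10.0.0.21 generic web-node-01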

2. Configure MySQL

Create a database account for PMM. It is a good practice not to use a superuser to connect the PMM client to the monitored database instance:

CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;

GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, BACKUP_ADMIN ON *.* TO 'pmm'@'localhost';

There are two possible sources of query metrics — the slow query log and Performance Schema. Here are the benefits and drawbacks of each:

Benefits

Slow query log:
1. More detail.
2. Lower resource impact (with the query sampling feature in Percona Server for MySQL).

Performance Schema:
1. Faster parsing.
2. Enabled by default on newer versions of MySQL.

Drawbacks

Slow query log:
1. PMM Client must be on the same host as the database server or have access to the slow query log.
2. Log files grow and must be actively managed.

Performance Schema:
1. Less detail.

Data source recommendations:
– MySQL 5.1-5.5: Slow query log
– MySQL 5.6+: Performance Schema
– MariaDB 10.0+: Performance Schema
– Percona Server for MySQL 5.7, 8.0: Slow query log
– Percona XtraDB Cluster 5.6, 5.7, 8.0: Slow query log

Since we use a MySQL version higher than 5.6 and our PMM server is located on a remote host, we will configure the Performance Schema source.

Add the following lines to the my.cnf file:

performance_schema=ON
performance-schema-instrument='statement/%=ON'
performance-schema-consumer-statements-digest=ON
innodb_monitor_enable=all
query_response_time_stats=ON
userstat=ON

Restart MySQL and create the PMM service:

pmm-admin add mysql --query-source=perfschema --username=pmm --password=pass --service-name=MYSQL_NODE --host=XXX.XXX.XXX.XXX --port=3306

You may also add the service via the PMM Server user interface in Configuration -> PMM Inventory -> Add Instance menu section. Check if the service is registered successfully:

pmm-admin inventory list services --service-type=mysql

Success!

Congratulations, you have successfully set up MySQL monitoring with PMM. Now you have access to organized, secure, and robust database management systems. Monitoring and managing your data can help your business scale, grow, and flourish.

HostedPMM Services

Now that you have set up MySQL monitoring with PMM, you can focus on bigger issues, like growing your business. HostedPMM offers hosted PMM servers for monitoring, automation, troubleshooting, and convenience for all your database needs. You can get your own PMM server, connect your instances, and focus on your project with HostedPMM. Get started for free and say hello to simpler database solutions. We offer a quick setup, backups, scalability, tech support, security updates, high availability, and no vendor restrictions.

If you have any questions or need assistance with HostedPMM, contact us at support@hostedpmm.com any time.

Related Posts:

How to set up open-source DB monitoring using PMM?

How to Set Up PostgreSQL Monitoring with PMM

How to Set Up PostgreSQL Monitoring Using PMM

1. Prepare your environment.

Before you start setting up PostgreSQL to be connected to PMM monitoring, you need to have a PMM Client and register it on the PMM Server. Let’s do it.

You may set up the PMM Client in different ways. It can be either running in a Docker container or installed by a package manager or from sources. In this article, we will use the easiest way — we will install it from a repository using a package manager. Make sure you have curl, gnupg, wget, and sudo installed on your server. First of all, you need to add the Percona repo to your repositories list.

For Debian-based OS

a. Configure repositories:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
dpkg -i percona-release_latest.generic_all.deb

b. Install the pmm2-client package:

apt update
apt install -y pmm2-client

c. Check if it was installed correctly.

pmm-admin --version

For RedHat-based OS

a. Configure repositories:

yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

b. Install the pmm2-client package:

yum install -y pmm2-client

c. Check if it was installed correctly.

pmm-admin --version

After you have installed the PMM Client, you should register it on your server. You need the HPMM server hostname or IP address and the credentials for your connection. You may find this information in your HostedPMM private cabinet.

To register the PMM client on the server, just type:

pmm-admin config --server-insecure-tls --server-url=https://admin:<password>@XXX.XXX.XXX.XXX:443 YYY.YYY.YYY.YYY generic mynode

where XXX.XXX.XXX.XXX is the IP address or hostname of the PMM server, YYY.YYY.YYY.YYY is the IP address or hostname of the PMM client, generic is the node type, and mynode is the name of the client.

2. Configure PostgreSQL

First of all, you should create a user for monitoring:

CREATE USER pmm WITH SUPERUSER ENCRYPTED PASSWORD '******';

After that, it is necessary to allow the PMM user to log in locally to the PostgreSQL instance. To do that, add a rule to the /var/lib/pgsql/data/pg_hba.conf file like the one below (the commented line just labels the field arguments; it is not necessary to add it):

local   all             pmm                                md5
# TYPE  DATABASE        USER        ADDRESS                METHOD

Reload the configuration:

su - postgres
psql -c "select pg_reload_conf()"

and check if the PMM user is able to log in:

psql postgres pmm -c "\conninfo"

When the PMM user is configured, the next step is to set up the database extension used for collecting query data. In this guide, we will use the official PostgreSQL extension pg_stat_statements. There is also a newer extension created by Percona — pg_stat_monitor. It gives you additional features like bucket-based aggregation, but it is still in the beta phase and unsupported, so you can use it at your own risk.

You may install pg_stat_statements with your package manager.

On Debian-based OS

apt install -y postgresql-contrib

On RedHat-based OS

yum install -y postgresql-contrib

Add these lines to the /var/lib/pgsql/data/postgresql.conf file:

shared_preload_libraries = 'pg_stat_statements'
track_activity_query_size = 2048 # Increase tracked query string size
pg_stat_statements.track = all   # Track all statements including nested
track_io_timing = on             # Capture read/write stats

Restart the database server and install the extension:

psql postgres postgres -c "CREATE EXTENSION pg_stat_statements SCHEMA public"
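As a quick optional check that the extension is actually collecting data, you can query its view:

psql postgres postgres -c "SELECT count(*) FROM pg_stat_statements;"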

Now you are ready to add the PostgreSQL service to your PMM server. To add an instance, run this command in your console:

pmm-admin add postgresql \
--username=pmm \
--password=pass \
--server-url=https://admin:admin@X.X.X.X:443 \
--server-insecure-tls

You may also do it in your PMM server user interface in Configuration -> PMM Inventory -> Add instance menu section.

Setup is Complete

Congratulations, you have successfully set up PostgreSQL monitoring with PMM. Now your database has powerful monitoring and performance features that will allow you to focus on other areas of your business. You can rest assured knowing that your database is being monitored and will give you live alerts if any big changes arise.

HostedPMM Services

Now that you have PostgreSQL and PMM set up, you can focus on bigger issues, like growing your business. If you need a server, HostedPMM offers hosted PMM servers for monitoring, automation, troubleshooting, and convenience for all your database needs. We provide servers for PMM to help your business. You can get your own PMM server, connect your instances, and focus on your project with HostedPMM. Our databases sync with a wide range of software to deliver a completely customized experience to suit your business needs.

Get started for free and enjoy more organized, secure, scalable, optimized, and adaptable database solutions. Our databases include in-depth troubleshooting and performance optimization. We offer quick setup, tech support, backups, scalability, security updates, high availability, and no vendor restrictions.

If you have any questions or need assistance with HostedPMM, contact us at support@hostedpmm.com at any time.

Related Posts:

How to set up open-source DB monitoring using PMM?

How to Set up MongoDB Monitoring with PMM

How to Set Up MongoDB Monitoring with PMM

1. Prepare your environment.

Before you get started setting up MongoDB to connect it to PMM monitoring, you need to have a PMM Client and register it on the PMM Server. Let’s do it.

You may set up the PMM Client in different ways. It can be either running in a Docker container or installed by a package manager or from sources. In this article, we will use the easiest way — we will install it from the repository using a package manager. Make sure you have curl, gnupg, wget, and sudo installed on your server. First of all, you need to add the Percona repo to your repositories list.

For Debian-based OS

a. Configure repositories:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
dpkg -i percona-release_latest.generic_all.deb

b. Install the pmm2-client package:

apt update
apt install -y pmm2-client

c. Check if it was installed correctly:


pmm-admin --version

For RedHat-based OS

a. Configure repositories:

yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

b. Install the pmm2-client package:

yum install -y pmm2-client

c. Check if it was installed correctly:

pmm-admin --version

After you have installed the PMM Client, you should register it on your server. You need the HPMM server hostname or IP address and the credentials for your connection. You may find this information in your HostedPMM private cabinet.

To register the PMM client on the server, just type:

pmm-admin config --server-insecure-tls --server-url=https://admin:<password>@XXX.XXX.XXX.XXX:443 YYY.YYY.YYY.YYY generic mynode

where XXX.XXX.XXX.XXX is the IP address or hostname of the PMM server, YYY.YYY.YYY.YYY is the IP address or hostname of the PMM client, generic is the node type, and mynode is the name of the client.

2. Configure MongoDB

After MongoDB and the PMM client are installed, you need a user with the proper privileges. You may add one in a Mongo shell session:

db.getSiblingDB("admin").createRole({
    role: "explainRole",
    privileges: [{
        resource: {
            db: "",
            collection: ""
            },
        actions: [
            "listIndexes",
            "listCollections",
            "dbStats",
            "dbHash",
            "collStats",
            "find"
            ]
        }],
    roles:[]
})

db.getSiblingDB("admin").createUser({
   user: "pmm_mongodb",
   pwd: "password",
   roles: [
      { role: "explainRole", db: "admin" },
      { role: "clusterMonitor", db: "admin" },
      { role: "read", db: "local" }
   ]
})

If you want to use PMM Query Analytics, you should turn on the profiling feature. To make this setting permanent, edit /etc/mongod.conf and add an operationProfiling section:

operationProfiling:
  mode: all
  slowOpThresholdMs: 200
  rateLimit: 100 # (Only available with Percona Server for MongoDB.)

Restart the mongod service:

systemctl restart mongod

After that, you are ready to connect your MongoDB to PMM monitoring. To do that, run:


pmm-admin add mongodb \
--username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster

You may add the --environment and --custom-labels options to set tags for the service to help identify it.
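For example (the environment name and label values here are made up for illustration):

pmm-admin add mongodb \
--username=pmm_mongodb --password=password \
--query-source=profiler --cluster=mycluster \
--environment=production --custom-labels='region=eu-west'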

After it has been added successfully, you may find your service on the PMM Server in the Configuration -> PMM Inventory -> Inventory list, or check on the client using the command:

pmm-admin inventory list services --service-type=mongodb

And now you should see your MongoDB on your monitoring panel.

HostedPMM Services

Now that you have MongoDB monitoring with PMM set up, you are free to focus on other important issues with your business. HostedPMM offers hosted PMM servers for monitoring, automation, troubleshooting, and convenience for all your database needs. We provide servers for PMM to help your business. You can get your own PMM server, connect your instances, and focus on your project with HostedPMM.

Get started with HostedPMM for free and enjoy more organized, adaptable, optimized, and secure database solutions. We offer quick setup, tech support, backups, scalability, security updates, high availability, and no vendor restrictions.

If you have any questions or need assistance with HostedPMM, contact us at support@hostedpmm.com for support.

Related Posts:

How to set up open-source DB monitoring using PMM?

PMM – SolarWinds (Vivid Cortex) Open Source alternative

PMM — SolarWinds (Vivid Cortex) Open Source alternative

What Are Database Monitoring Tools and Why Do You Need Them

Databases are often the core of a system, and it is very important to know at every moment whether they are healthy and working properly.

When your project is not that big, standard monitoring tools are enough. But when your databases grow, you will probably want more information about them. Database health is no longer the only thing you are interested in.

Things like performance, queries, connections and other kinds of statistics may become more meaningful to you and can save you money (or raise your costs if you use your resources inefficiently).

So in that case you need specific tools that are designed especially for monitoring databases.

Tools for Database Monitoring

One of the most popular tools for database monitoring is VividCortex from SolarWinds. It is supplied as SaaS (software as a service).

VividCortex allows you to monitor errors, latency, slow queries, replication health, max connections, disk health. It provides dashboards out of the box, and guides you on which metrics you should monitor.

Another feature is events. For example, if you are getting close to any threshold, it will emit an event. This approach removes ambiguity around which metrics are important to monitor.

The main advantage of VividCortex is the SaaS platform. That means you don’t need any infrastructure to host your monitoring services. You just install agents on your database hosts and that’s all.

Another cool thing is 24/7 support and good training programs. That means minimal effort is needed from you to set up and manage your monitoring system. SolarWinds gives you a 14-day trial period to try the service. The pricing is available on request and is not public. So we can conclude that VividCortex is a good enterprise product for big and heavily loaded projects.

But what if you are not a big company and you don’t need such a level of vendor support? Are there any open source alternatives to such a powerful tool as VividCortex? Fortunately, yes. There is

Percona Monitoring and Management.

It is an open source monitoring, management and observability solution for databases, and a very powerful tool. PMM leverages Prometheus and provides dashboards out of the box.

Its monitoring capabilities allow you to keep an eye on slow queries, errors, replication lag, max connections, disk space, etc.

Built-in advisors run regular checks for potential security threats, performance degradation, data loss and data corruption.

PMM gives you several usage models. You may deploy it on your own infrastructure and have it managed by your own team absolutely free. But if you need professional support, you may get it 24×7. PMM is also available as SaaS.

To conclude, we can say that Percona Monitoring and Management is a good alternative to enterprise solutions like VividCortex. It gives you almost the same powerful abilities to monitor and manage your databases.

It is a free open source product, so it is suitable for smaller companies and projects, but PMM can also be delivered as SaaS with 24×7 support for the enterprise world.

How Do We Use PMM for DB Cluster Monitoring – Real-Life Use Case

We would like to share our experience of using PMM for DB cluster monitoring. This is a great tool with useful dashboards that help us improve the availability and performance of our infrastructure.

We perform weekly reviews of our main cluster using these dashboards:

1. PMM Query Analytics:

The main concept on this screen is ‘Load’, which is the average query execution time multiplied by queries per second; essentially, this metric indicates how many MySQL resources a query consumes. E.g. a query that ran for 5 minutes but only once won’t make it to the top.

Usually, we look at the top 10 queries.

You can click on each query digest and see more details about it.

It helped us several times to detect inefficient queries and cases when a query was executed much more often than anticipated.

2. Nodes overview:

A supplementary dashboard that just displays basic OS metrics – we usually look here to examine moments of peak load in order to determine the performance bottleneck.

3. MySQL instances overview:

Same as (2) but holds MySQL-specific metrics.

4. PMM Advisor checks:

This can be rather handy for staying aware of vulnerabilities in your version, configuration flaws, etc. The warnings in the screenshot above are about an outdated minor Percona Server version ("Current version is 8.0.22, the latest available version is 8.0.29.").

5. Our last check is PMM Query errors, but the representation there was inconvenient for us, so we replaced it with our own implementation which collects MySQL query errors every 2 seconds and injects them into ClickHouse for further inspection. The results look like this:

In the screenshot above we can see a lot of lock insert collisions (which is expected) and 15 DELETE deadlocks that are yet to be investigated.

How to enable SSL for PMM server

This article assumes you have already installed PMM on your machine.

Securing your web services by moving them to use HTTPS instead of plain HTTP is one of the best ways to make sure the exchange remains encrypted for all the parties involved in transporting data between the service and the client browser. Extra protection against man-in-the-middle (MITM) attacks is a welcome feature as well.

Your PMM instance comes secured with a self-signed certificate out of the box: at least your credentials are protected from casual eavesdropping. This is no help against MITM attacks, though, and not the best practice in general. To fully utilize HTTPS, you will need to acquire a certificate issued by one of the global certificate authorities.

The process involves:

  1. Creating a DNS record for your PMM instance to be accessed over the web
  2. Obtaining an SSL/TLS certificate for the PMM instance
  3. Installing the certificate and verifying the setup
  4. (optionally) Hosting PMM behind a reverse proxy

We will use mypmm.example.com as an address we want our PMM to be available at, example.com being a domain under our control.

Creating a DNS record for the PMM instance with the address you prefer

You would need a new “type A” record pointing to the IP address that publicly serves your PMM:

mypmm.example.com.    IN    A    1.1.1.1

Make sure the record resolves as expected, for example by opening https://mypmm.example.com in a browser (at this point you will still see the self-signed certificate warning).
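A quick check from the command line also works; assuming dig is installed, the record should resolve to the IP from the previous step:

dig +short mypmm.example.com
1.1.1.1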

Obtaining an SSL/TLS certificate for your PMM instance

In case you have a wildcard certificate for “*.example.com”, you can skip this step and move on to the next one.

Otherwise, we recommend using Let’s Encrypt — a service that issues SSL/TLS certificates for free. https://letsencrypt.org/

The simplest way to use it is via a recommended tool named Certbot. https://certbot.eff.org/

Certbot is distributed as a Snap package (it requires the “snapd” daemon, which most Linux distributions support), but there are alternatives available on the website.

Here’s how you proceed with Snap. Update it first:

sudo snap install core; sudo snap refresh core

Install Certbot package:

sudo snap install --classic certbot

Link new binary to a place inside your $PATH variable:

sudo ln -s /snap/bin/certbot /usr/bin/certbot

You should now have the certbot command available in your terminal. If you get “command not found” instead, restart your shell session by logging out and back in.

Launch Certbot as a standalone web server, temporarily binding it to TCP port 80. This will start the certificate acquisition:

sudo certbot certonly --standalone -d mypmm.example.com

You will be asked a number of questions on behalf of Let’s Encrypt. When the utility finishes its work, you will see a short summary.

Certificate is saved at: /etc/letsencrypt/live/mypmm.example.com/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/mypmm.example.com/privkey.pem
This certificate expires on 2023-01-28.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.


We got our certificate, time to install it.

Installing the certificate and verifying the setup

Replace the default self-signed certificates inside the Docker container with your own:

docker cp -L /etc/letsencrypt/live/mypmm.example.com/cert.pem pmm-server:/srv/nginx/certificate.crt

docker cp -L /etc/letsencrypt/live/mypmm.example.com/chain.pem pmm-server:/srv/nginx/ca-certs.pem

docker cp -L /etc/letsencrypt/live/mypmm.example.com/privkey.pem pmm-server:/srv/nginx/certificate.key

Restart Docker container:

docker restart pmm-server

Check your new certificate by opening https://mypmm.example.com. If you were logged into PMM before, you might need to clear your browser cache or use incognito mode to immediately see the difference.
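Keep in mind that Let’s Encrypt certificates are valid for 90 days. Certbot renews them automatically, but the renewed files still need to be copied back into the container. One way to automate this is a deploy hook; here is a sketch that reuses the exact paths and container name from above (Certbot runs executables placed in /etc/letsencrypt/renewal-hooks/deploy/ after each successful renewal):

#!/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/pmm-certs.sh
# Copy the freshly renewed certificate into the PMM container and restart it.
set -e
LIVE=/etc/letsencrypt/live/mypmm.example.com
docker cp -L "$LIVE/cert.pem"    pmm-server:/srv/nginx/certificate.crt
docker cp -L "$LIVE/chain.pem"   pmm-server:/srv/nginx/ca-certs.pem
docker cp -L "$LIVE/privkey.pem" pmm-server:/srv/nginx/certificate.key
docker restart pmm-server

Make the script executable with chmod +x and test the renewal flow with sudo certbot renew --dry-run (note that deploy hooks are normally skipped during a dry run).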

Hosting PMM behind a reverse proxy

There are a few things to keep in mind when setting up PMM behind a reverse proxy (such as NGINX). You will have to pass several HTTP headers and enable WebSocket support for the web interface to work properly.

An NGINX configuration example that serves the locally available PMM at mypmm.local.net as the publicly accessible mypmm.example.com:

http {

	# [...skipped...]

	# WebSocket handling for proxied connections
	map $http_upgrade $connection_upgrade {
		default upgrade;
		'' close;
	}

	# [...skipped...]

	server {
		server_name mypmm.example.com;
		ssl_certificate /etc/letsencrypt/live/mypmm.example.com/fullchain.pem;
		ssl_certificate_key /etc/letsencrypt/live/mypmm.example.com/privkey.pem;

		listen 443 ssl;

		location / {
			proxy_set_header Origin           https://mypmm.local.net;
			proxy_set_header Host             mypmm.local.net;
			proxy_set_header Upgrade          $http_upgrade;
			proxy_set_header Connection       $connection_upgrade;
			proxy_set_header X-Real-IP        $remote_addr;
			proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
			proxy_pass                        http://mypmm.local.net/;
		}
	}
}
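After adding this configuration, validate it and reload NGINX:

sudo nginx -t
sudo systemctl reload nginx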

In addition, your PMM instance might require gRPC communication with remote pmm-agents. gRPC runs over HTTP/2, so your reverse proxy must be able to proxy HTTP/2 traffic to the PMM server; depending on your web server and its configuration, this might not be possible.

How to Install PMM Using Linode

There are many different ways to install a PMM server. In this article, we will describe a solution based on a Linode VPS on which we will install the Docker Engine and run the PMM server in a Docker container. Linode, which is a blend of the words Linux and node, is a cloud hosting provider of Linux-based virtual private servers. Linode makes virtual computing simpler, more affordable, and more accessible.

How to Install PMM

These are the requirements you’ll need to begin:

  • Registered account on the Linode Cloud platform
  • Any SSH client
  • Any web browser with enabled JavaScript support

Creating Linode Instance

Log in to the Linode Cloud console.

https://cloud.linode.com/linodes

Click the “Create Linode” button on the top-right side of the page.

Choose distribution and region.

Choose the type of VPS.

Set a custom name for the Linode instance, for example, PMM-demo.

Type the root password, and add SSH Keys of allowed users.

Other settings are optional; if you don’t need them, you may ignore them.

Click the “Create Linode” button.

Wait until the VM is created and the Provisioning status changes to Running.

After starting the virtual machine, we can connect to it via SSH.
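For example, using the root user and the public IP shown in the Linode console:

ssh root@<linode-instance-ip>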

Installing Docker Engine

First of all, let’s update the packages list:

apt update

Then install the necessary packages:

apt install ca-certificates curl gnupg lsb-release

Wait some time until the installation process is complete.

Now let’s add the GPG key that signed the packages from the repository:

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Then you need to add the repository configuration:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the packages list again:

apt update

Install Docker Engine:

apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin
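Before moving on, you can verify that Docker Engine works by checking the version and running a test container:

docker --version
docker run --rm hello-world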

Installing PMM Server

Pull image from Docker Hub

Most likely you have not yet downloaded the PMM-server image, so download it using the following command:

docker pull percona/pmm-server:2

Create persistent volume

The PMM server stores the collected data in databases that run inside the same image (not the best practice, but the image authors found it more convenient). You surely don’t want to lose this data when updating the server, so you need to create a persistent volume; let’s call it pmm-data.

docker volume create pmm-data
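You can confirm that the volume exists and see where Docker keeps it on disk:

docker volume inspect pmm-data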

Run PMM server in container with persistent volume

Now you can safely launch the container with the PMM server and proceed with its direct configuration.

docker run --detach --restart always \
  --publish 443:443 \
  -v pmm-data:/srv \
  --name pmm-server \
  percona/pmm-server:2

Check the status of the running PMM server

Let’s check the status of the PMM server – it should be fine, but just in case.

docker ps
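The container should be listed with an Up status; you can also peek at the startup logs:

docker ps --filter name=pmm-server
docker logs --tail 20 pmm-server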

Reset admin password

Now let’s change the password of the admin user to “new_pass”.

docker exec -t pmm-server bash -c 'grafana-cli --homepath /usr/share/grafana --configOverrides cfg:default.paths.data=/srv/grafana admin reset-admin-password new_pass'

Change admin email (optional)

For now, it’s not possible to change the admin user’s email via the PMM CLI, but you can do it with a simple API call. First, generate a Basic authentication token by base64-encoding the login:password pair:

echo -n "admin:new_pass" | base64

Execute a simple cURL request:

curl --location --request PUT "https://127.0.0.1/graph/api/users/1" \
  --insecure \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Basic YWRtaW46bmV3X3Bhc3M=" \
  --data "{\"email\":\"ownemail@example.com\",\"login\":\"admin\"}"
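To verify the change, you can read the user record back with the same credentials (the /graph prefix proxies Grafana’s HTTP API, so this assumes the standard Grafana admin users endpoint):

curl --insecure \
  --header "Authorization: Basic YWRtaW46bmV3X3Bhc3M=" \
  "https://127.0.0.1/graph/api/users/1"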

Login to the admin panel

Now you can open the PMM server page through a browser.

https://<linode-instance-ip>/graph/login

You will most likely see a warning about the self-signed certificate that was auto-generated on the first run of the PMM server. Just ignore this message. For the login/password pair, use admin and the password chosen in the “Reset admin password” step.

Installation Complete

Congratulations, you have successfully installed PMM using Linode. PMM – Percona Monitoring and Management – provides database monitoring and management services and helps streamline your business with convenient cloud computing solutions. PMM simplifies cloud computing and helps optimize databases by allowing you to easily find, fix, and prevent issues with scaling, bottlenecks, and potential outages.

There are many options for installing PMM, and Linode offers a great one: a Linode VPS on which we install the Docker Engine and run the PMM server in a Docker container. To install PMM using Linode you just need a registered account on the Linode Cloud platform, any SSH client, and any web browser with JavaScript enabled.

HostedPMM Services

Now that you have installed PMM, you can focus on bigger issues, like growing your business. HostedPMM offers hosted PMM servers for monitoring, automation, troubleshooting, and convenience for all your database needs. We provide servers for PMM as a service to you. You can get your own PMM server, connect your instances, and focus on your project with HostedPMM. Get started for free and say hello to simpler database solutions. We offer a quick setup, backups, scalability, tech support, security updates, high availability, and no vendor restrictions.

If you have any questions or need assistance with HostedPMM, contact us at support@hostedpmm.com for support.

Related Posts: How to set up open-source DB monitoring using PMM?

How to deploy PMM using AWS

Here is a small HowTo about installing the Percona Monitoring and Management (PMM) server using the AWS Marketplace.

Requirements:

  • AWS account
  • Web-browser with enabled JavaScript
  • Cup of coffee

First, you need to open the AWS Marketplace page and search for “Percona Monitoring and Management Server”, or just follow this direct link.

Click Continue to Subscribe and you’ll be redirected to the Subscribe to this software page, where you can review the details of your subscription, additional taxes, and so on.

Let’s click Continue to Configuration.

On the next page, called Configure this software, you need to choose the version of PMM and the region, then click the Continue to Launch button.

On the Launch this software page you need to choose:

  • Choose action: Launch from website
  • EC2 Instance Type: Any instance type that suits your needs; t2.medium will be used in this example

Scroll down to the VPC Settings block, where you can set:

  • Virtual Private Cloud (VPC) – create a new one if it does not exist yet
  • Subnet – used for communication between instances in the VPC
  • Security group – the Inbound and Outbound rules of the selected security group should allow connections over the SSH, HTTP, and HTTPS protocols from any IP address (see the AWS CLI example after this list)
  • Key pair – create a new one if it does not exist yet
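If you prefer the AWS CLI over the console for the security group rules, opening the three ports can look like this (sg-0123456789abcdef0 is a placeholder for your security group ID; consider restricting the source CIDR for SSH to your own network):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0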

Click the Launch button when you are ready.

If everything is okay, you’ll see a page like the one above. To check the initialization status, go to the EC2 Console page.

From the Instances page you can copy the IP address of the instance and connect to it via a web browser; ignore the self-signed certificate warning.

Here, for the login, you need to enter the ID of the instance from the AWS Console; copy and paste it.

After that, the web browser redirects you to the final setup page.

Here you need to enter a username and password.

  • Default username: admin
  • Default password: admin

On the next step you will be asked to reset the default password, or you can skip this step.

And we are finally in, congrats!