

Posted by Admin | Last updated: 2020-06-11
What is Percona Server for MongoDB?

A server plays a major role in serving the needs of its users. People usually rely on servers to manage or store data in large volumes, and today there are many servers on the market that power different kinds of applications for different users. Each of these servers has its own unique features and its own importance in the market. In the case of MongoDB, for instance, Percona Server acts as such a server.

MongoDB is a popular database for many application strategies. Today many companies adopt it as an alternative to relational databases, and it acts as a solid foundation for application development, testing, and deployment. A database performs well when the underlying server enhances its read and write transactions, and MongoDB can use Percona Server to deliver applications at high speed. Would you like to know how? Read the complete article to get an overview of Percona Server.

What is Percona Server?

Percona Server for MongoDB is a free, open-source, drop-in replacement for MongoDB Community Edition. It is designed as an enhancement of the base MongoDB server setup and delivers higher performance and improved security at lower cost. According to the statistics, the software has been downloaded more than 12,000 times. Many database administrators recommend it as the right free and open-source database management system for your application data, with support and advice in addition. The software is available free of cost, enhances data security, and improves database performance. Moreover, Percona Server for MongoDB is supported by ClusterControl as a deployment option.

Get more information on Percona Server with live examples in the MongoDB Course.

I hope you now have a basic idea of Percona Server, so let us move on to its features.

Percona Server Features:

This server offers many useful features. They are listed below.

a) Hot Backups:

Percona Server for MongoDB can create a physical backup of the data on a running server without any noticeable degradation of operations. A database administrator usually does this by running the createBackup command on the admin database and specifying the backup directory, as shown below.

> use admin
switched to db admin
> db.runCommand({createBackup: 1, backupDir: "/my/backup/data/path"})
{ "ok" : 1 }

When you execute the above command, a response of "ok" : 1 means the backup succeeded. On the other hand, if you receive "ok" : 0, you can conclude that there was an error backing up the database, for example:

{ "ok" : 0, "errmsg" : "Destination path must be absolute" }

Percona Server for MongoDB also supports restoring the database from such a backup copy. To do so, first stop the MongoDB instance, then clean the database directory, copy the files from the backup directory, and restart the MongoDB service. You can do all of this with the following command.

$ service mongod stop && rm -rf /var/lib/mongodb/* && cp --recursive /my/backup/data/path /var/lib/mongodb/ && service mongod start

Besides, it also lets you store the backup copy in archive format.

> use admin
> db.runCommand({createBackup: 1, archive: "path/to/archive.tar" })
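To restore from such an archive, the steps mirror the directory-based restore above. The command below is only a minimal sketch: it assumes the archive is a plain tar of the data files and that the data directory is /var/lib/mongodb, so verify both against the Percona documentation for your version.

$ service mongod stop && rm -rf /var/lib/mongodb/* && tar -xf path/to/archive.tar -C /var/lib/mongodb/ && service mongod start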

Do you want some more features? Then scroll down to learn about them.

||{"title":"Master in MongoDB", "subTitle":"MongoDB Certification Training by ITGURU's", "btnTitle":"View Details","url":"https://onlineitguru.com/mongodb-training.html","boxType":"demo","videoId":"PdSzLWSbYB8"}||

This platform also allows you to back up to AWS S3, either with the default settings or with an external configuration. In the case of a default backup, you can execute the following command.

> db.runCommand({createBackup: 1,  s3: {bucket: "backup", path: "newBackup"}})
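For a non-default setup, the s3 sub-document can also carry the endpoint and credentials explicitly. The command below is only a sketch; the field names (endpoint, accessKeyId, secretAccessKey) and their values are assumptions used to illustrate an external configuration and should be checked against the Percona documentation for your version.

> db.runCommand({createBackup: 1, s3: {bucket: "backup", path: "newBackup", endpoint: "s3.us-east-1.amazonaws.com", accessKeyId: "ACCESS_KEY_ID", secretAccessKey: "SECRET_ACCESS_KEY"}})
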
Data-at-Rest Encryption:

MongoDB version 3.2 introduced data-at-rest encryption for the WiredTiger storage engine, ensuring that data files can be decrypted and read only by people who hold the decryption key. Data-at-rest encryption was introduced in Percona Server for MongoDB version 3.6, and it is applied only while data-at-rest encryption is enabled. Percona Server for MongoDB uses the encryptionCipherMode option with two selectable cipher modes:

a) AES256-CBC (the default cipher mode)

b) AES256-GCM

You can enable this encryption using the following command.

$ mongod ... --enableEncryption --encryptionKeyFile 

The --encryptionKeyFile option specifies the path to the file that contains the encryption key.
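A minimal sketch of putting this together, assuming the key is generated with openssl and the cipher mode is selected via --encryptionCipherMode (the key file path below is only illustrative):

$ openssl rand -base64 32 > /data/key/mongodb.key
$ chmod 600 /data/key/mongodb.key
$ mongod ... --enableEncryption --encryptionKeyFile /data/key/mongodb.key --encryptionCipherMode AES256-GCM
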
Audit logging:

In every data management system, administrators need to track the activities taking place. When auditing is enabled in Percona Server for MongoDB, the server generates an audit log file that records events such as authentication and authorization. When you start the server with auditing enabled, the logs are not displayed dynamically at run time. In MongoDB Enterprise Edition, audit logs are available in two formats, JSON and BSON, whereas in Percona Server for MongoDB, auditing is limited to the JSON format. Here the server logs only the important commands, in contrast to MongoDB, which logs everything. The filtering syntax in Percona Server for MongoDB is somewhat unclear, so enabling the audit log without a filter produces many more entries. Moreover, the platform allows you to write your own filter specifications.
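For example, here is a minimal sketch of starting the server with auditing enabled and a filter that records only authentication events. The log path and filter value are illustrative, and the option names are assumed to follow the standard audit options:

$ mongod ... --auditDestination file --auditFormat JSON --auditPath /var/lib/mongodb/auditLog.json --auditFilter '{ "atype": "authenticate" }'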

Percona Memory Engine:

This is a special configuration of the WiredTiger storage engine. The engine does not store user data on disk; except for diagnostic data, the data resides entirely in main memory, which makes data processing much faster. You must ensure that there is enough memory to hold the dataset, so that the server does not shut down. The user selects the storage engine with the --storageEngine option. Note that data created by one storage engine is not compatible with other storage engines, because each storage engine has its own data model. For instance, to select the in-memory storage engine, you must first stop any running instance and then issue the following commands.

$ service mongod stop
$ mongod --storageEngine inMemory --dbpath 
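Equivalently, the engine can be selected in the configuration file. The snippet below is a minimal sketch assuming the standard /etc/mongod.conf layout and the inMemory engine options; the size value is only illustrative:

# /etc/mongod.conf (excerpt)
storage:
  dbPath: /var/lib/mongodb
  engine: inMemory
  inMemory:
    engineConfig:
      inMemorySizeGB: 4   # assumed option; caps how much data the engine keeps in RAM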

If you already have some data in MongoDB Community Edition and would like to move to Percona Memory Engine, execute the commands below.

$ mongodump --out 
$ service mongod stop
$ rm -rf /var/lib/mongodb/*
$ sed -i '/engine: .*inMemory/s/#//g' /etc/mongod.conf
$ service mongod start
$ mongorestore 

External LDAP Authentication with SASL:

Whenever a client makes a read or write request to a MongoDB mongod instance, the server normally authenticates the user against its own user database first. External authentication instead allows the MongoDB server to verify the client credentials against a separate service. External authentication involves the following components:

a) An LDAP server that stores all the user credentials remotely.

b) A SASL daemon (saslauthd) that acts as the MongoDB server's local proxy for the remote LDAP service.

c) A SASL library that provides the necessary authentication mechanisms for the MongoDB client and server.

The authentication sequence proceeds as follows:

1) The client connects to the running MongoDB instance and creates a PLAIN authentication request using the SASL library.

2) The authentication request is sent to the server as a special MongoDB command, and the MongoDB server receives it together with the request payload.

3) The server creates a SASL session seeded with the client credentials, using its own reference to the SASL library.

4) The MongoDB server passes the authentication payload to the SASL library, which hands it over to the saslauthd daemon. The daemon passes it on to LDAP and waits for a YES or NO response to the authentication request, that is, whether the user credentials check out.

5) saslauthd passes the response back to the MongoDB server through the SASL library, and the server then accepts or rejects the request accordingly.
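As a minimal operational sketch (the user name, password, and database below are illustrative), the server is started with the PLAIN mechanism enabled, the LDAP-backed user is created in the $external database, and the client then authenticates against it:

$ mongod ... --auth --setParameter authenticationMechanisms=PLAIN,SCRAM-SHA-1
> db.getSiblingDB("$external").createUser({user: "jsmith", roles: [{role: "read", db: "test"}]})
> db.getSiblingDB("$external").auth({mechanism: "PLAIN", user: "jsmith", pwd: "secret", digestPassword: false})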

Percona MongoDB operator

The Percona Kubernetes Operator for MongoDB automates various processes, such as the creation, deletion, or modification of nodes within the user's cluster ecosystem. It can also be used to provision a new database cluster or to scale up an existing one.

Moreover, this operator includes the essential Kubernetes settings that yield a consistent, properly configured Percona MongoDB replica set. The documentation for the Percona Operator for Percona Server for MongoDB gives the details.

The features supported here are:

Cluster Scaling:

It allows us to modify the size parameter of the cluster so that we can add or remove replica set members. The suggested minimum size for a functioning replica set is three members. A scaling example is sketched below.
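A minimal sketch of scaling through the operator's custom resource; the file name, replica set name, and size below are illustrative and assume the spec.replsets layout described in the operator documentation:

# deploy/cr.yaml (excerpt)
spec:
  replsets:
    - name: rs0
      size: 5

$ kubectl apply -f deploy/cr.yaml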

Monitoring:

To monitor a Percona Server for MongoDB replica set, the easiest approach is to deploy PMM (Percona Monitoring and Management). The installation process uses Helm, the Kubernetes package manager, as sketched below.
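A minimal sketch of such a deployment, assuming the Percona Helm chart repository and chart name shown below (verify both against the current Percona documentation):

$ helm repo add percona https://percona.github.io/percona-helm-charts/
$ helm repo update
$ helm install pmm percona/pmm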

Automatic Backups:

Users can configure automatic scheduled backups for their clusters, or run a backup on demand. These backups execute with the help of PBM (Percona Backup for MongoDB), and the data is stored on local PVs or on a cloud platform. An on-demand backup request is sketched below.
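A minimal sketch of requesting an on-demand backup through a backup custom resource; the apiVersion, resource kind, and names below are assumptions based on the operator's documented conventions (older operator versions name the cluster reference field psmdbCluster instead of clusterName), so check them for your operator version:

# backup.yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  clusterName: my-cluster-name
  storageName: s3-us-west

$ kubectl apply -f backup.yaml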

Percona Exporter

The latest version of the Percona MongoDB exporter for Prometheus was released with PMM 2.10.0. This version is a complete rewrite with a new approach to collecting and exposing metrics: all of them are gathered from MongoDB diagnostic commands.

The earlier MongoDB exporter branch exposed only a static list of selected metrics with traditional names and labels. The latest exporter takes a totally different approach: it exposes all the available metrics that the server returns through its internal diagnostic commands, and metric renaming follows strict rules that apply to all metrics.

In the latest exporter, the approach to exposing all metrics is to traverse the output of diagnostic commands such as getDiagnosticData, looking for values to expose. For example, within serverStatus we find asserts, and within asserts there are metrics to expose, such as regular, msg, rollovers, user, and so on.
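A minimal sketch of running the new exporter against a local instance and checking the exposed metrics; the flag names and the default port 9216 are assumed from the rewritten exporter's documentation, and the URI is illustrative:

$ mongodb_exporter --mongodb.uri=mongodb://127.0.0.1:27017 --compatible-mode
$ curl -s http://127.0.0.1:9216/metrics | grep asserts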

||{"title":"Master in MongoDB", "subTitle":"MongoDB Certification Training by ITGURU's", "btnTitle":"View Details","url":"https://onlineitguru.com/mongodb-training.html","boxType":"reg"}||

Benefits of using the new exporter

There are some useful benefits of using the new exporter. As described earlier, the new exporter collects all the available metrics, so it is now easier to build new PMM dashboards on top of them. Moreover, when a new MongoDB version exposes new metrics, they become available automatically, with no need to add them manually or upgrade the exporter.

Further, the exporter allows a special compatibility mode. In this mode, the old exporter's metrics are exposed alongside the new ones, so the current dashboards keep working without any modification.

Further benefits of the new exporter are:

Allowing compatibility mode

Debugging

Releases, etc.

Thus, using the new exporter does not affect the current dashboards, since compatibility mode exposes both the old and the new metrics.

Conclusion

Hence, I hope this has given you a brief idea of Percona Server for MongoDB, from hot backups and encryption to LDAP authentication with SASL. You can gain more practical knowledge when you enroll in MongoDB training, where live experts provide hands-on guidance with real-world examples.