Let’s talk about the open source database MongoDB

This tutorial introduces you to the MongoDB database. You'll discover how to install the software, manipulate the data, and apply data design techniques to your own applications.

All examples were developed using MongoDB 5, but most will work in previous or later versions. Code can be entered directly into a client application or MongoDB shell (mongo or mongosh) to query and update the database.

What is MongoDB?

MongoDB is an open source NoSQL database. NoSQL means the database does not use the relational tables of traditional SQL databases.

There are a range of NoSQL database types, but MongoDB stores data in JavaScript-like objects called documents, whose contents look like this:

{
  _id: "123",
  name: "Craig"
}

Although MongoDB has become synonymous with the JavaScript-based runtime Node.js, most frameworks, languages, and runtimes have official MongoDB database drivers, including Node.js, PHP, and Python. You can also choose a library such as Mongoose, which provides higher-level abstraction or object-document mapping (ODM) functionality.

Unlike SQL tables, there are no structural restrictions on what you can store in MongoDB. Data schema is not enforced. You can store anything you like where you like. This makes MongoDB ideal for more organic – or chaotic – data structures.

Consider a contact address book. Individuals often have multiple phone numbers. You could define three phone fields in one SQL table, but that would be too many for some contacts and too few for others. Eventually, you will need a separate phone list, which adds complexity.

In MongoDB, these phone numbers can be defined as an array of any length within the same document:

{
  _id: "123",
  name: "Craig",
  telephone: [
    { home: "0123456789" },
    { work: "9876543210" },
    { cell: "3141592654" }
  ]
}

Note that MongoDB uses a similar JavaScript object notation for data updates and queries, which may present some challenges if you are used to SQL.

Essentials of MongoDB

Before we go further, let's define the MongoDB terms used throughout this article.

  • Document. An individual object in a data store, similar to a record or row in a SQL database table.
  • Field. A single item of data in a document, such as a name or telephone number, similar to a SQL field or table column.
  • Collection. A set of similar documents, similar to a SQL table. While you could put all documents into a single collection, it is usually more practical to separate them by type. In a contact address book, you could have a collection of people and a collection of companies.
  • Database. A collection of related collections, analogous to a SQL database.
  • Schema. A schema defines the data model. In a SQL database, you must define a table with its fields and types before storing data. This is not necessary in MongoDB, although it is possible to create a schema that validates documents before they are added to a collection.
  • Index. A data structure used to improve query performance, with the same meaning as a SQL index.
  • Primary key. A unique identifier for each document. MongoDB automatically adds a unique, indexed _id field to every document in a collection.
  • Denormalization. In SQL databases, "normalization" is a technique for organizing data and eliminating duplication. In MongoDB, "denormalization" is encouraged: you actively duplicate data so that a single document contains all the information it needs.
  • JOINs. SQL provides a JOIN operator so data can be retrieved from multiple normalized tables in a single query. JOIN-like queries were not possible in MongoDB until the $lookup aggregation operator arrived in version 3.2, and limitations still exist. This is another reason data should be denormalized into self-contained documents.
  • Transactions. When an update changes two or more values in a single document, MongoDB ensures they all succeed or all fail. Updates spanning two or more documents must be wrapped in a transaction. MongoDB has supported transactions since version 4.0, but they require a replica set or a sharded cluster. The installation example below uses a single standalone server, so transactions are not possible.

How to install MongoDB

To use MongoDB on your local machine, you have three options. We'll walk you through each one.

1. Use Docker

Docker is a software management tool that can install, configure, and run MongoDB – or any other application – in minutes.

Install Docker and Docker Compose, then create a project folder containing a file named docker-compose.yml with the following content (note: the indentation is essential).

version: '3'

services:

  mongodb:
    image: mongo:5
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=pass
      - MONGO_INITDB_DATABASE=mongodemo
    container_name: mongodb
    volumes:
      - dbdata:/data/db
    ports:
      - "27017:27017"

  adminer:
    image: dehy/adminer
    container_name: adminer
    depends_on:
      - mongodb
    ports:
      - "8080:80"

volumes:
  dbdata:

Navigate to the folder from the command line and run:

docker-compose up

The latest version of MongoDB 5 will be downloaded and started. This takes a few minutes on first startup, but subsequent runs are much faster.

Please note:

  • A MongoDB administrator account is defined with the user ID "root" and password "pass".
  • Data is persisted between restarts in a Docker volume named dbdata.
  • The Adminer database client is also provided.

You can use any MongoDB database client to connect to localhost:27017 using the ID "root" and password "pass". Alternatively, you can access Adminer at http://localhost:8080/ and log in with the credentials below.

  • System: MongoDB (alpha)
  • Server: host.docker.internal
  • Username: root
  • Password: pass

Note:

The server host.docker.internal works on both Mac and Windows devices running Docker Desktop. Linux users should use the device's network IP address rather than localhost (Adminer resolves this to its own Docker container).

Adminer login

Adminer allows you to inspect collections and documents. However, be aware that collections are labeled "tables".

Adminer collection view

To run commands, you can use the MongoDB Shell (mongosh) or the older mongo command-line REPL (Read-Eval-Print Loop) environment.

Access the bash shell of the Docker MongoDB container.

docker exec -it mongodb bash

Then start the MongoDB shell with the ID and password:

mongosh -u root -p pass

(You can use the older mongo command if you prefer.) You can then issue MongoDB commands, such as the following.

  • show dbs; — show all databases
  • use mongodemo; — use a specific database
  • show collections; — list the collections in the database
  • db.person.find(); — list all documents in a collection
  • exit; — exit/close the shell

Shut down MongoDB by running the following command in the project directory:

docker-compose down

2. Use a cloud provider (no installation required)

You can use a managed MongoDB instance, so there is no need to install anything locally. An internet connection is essential, and response speed will depend on the host and your bandwidth. Most services charge a monthly fee and/or usage-based fees.

The host will usually provide connection details so you can manage the database remotely using MongoDB client software.

3. Install MongoDB locally

MongoDB can be installed and configured on Linux, Windows, or macOS. There are two editions available:

  1. A commercial enterprise version
  2. An open source community version (used in this tutorial).

MongoDB's installation page provides instructions for each operating system. Be sure to follow them carefully so your installation is successful!

How to access your MongoDB database

Now that your MongoDB database is installed, it's time to learn how to manage it. Let's discuss what you need to do in order to access and use your database.

1. Install a MongoDB client

Managing the database requires a MongoDB client application. Whether you are using a cloud or local installation, we recommend installing the command-line mongosh MongoDB Shell.

Adminer is a web-based database client that supports MongoDB, although it is currently limited to inspecting collections. Adminer can be downloaded as a single PHP script, but if you installed it using Docker above, it is already set up.

A GUI client application provides a friendlier interface for updating and inspecting data. There are several options, including the free and cross-platform MongoDB Compass:

MongoDB Compass

Studio 3T, another GUI contender, is a commercial application that offers limited functionality for free.

Studio 3T

To connect to your MongoDB database, any client will need the following details:

  1. The machine's network name, URL, or IP address (localhost for local installations).
  2. The MongoDB port (27017 by default).
  3. A user ID and password. A root user is usually defined at installation time.

2. Set and save database access credentials

The root administrator has unrestricted access to all databases. Generally speaking, you should create a custom user with specific permissions to limit access and improve security.

For example, the following command creates a user named myuser with the password mypass who has read and write permissions on the mydb database.

use mydb;

db.createUser({
  user: "myuser",
  pwd: "mypass",
  roles: [
    { role: "readWrite", db: "mydb" }
  ]
});

How to insert a new document in MongoDB

There is no need to define a database or collection before inserting your first document. Using any MongoDB client, simply switch to a database named mongodemo:

use mongodemo;

Then insert a single document into a new person collection:

db.person.insertOne(
  {
    name: 'Abdul',
    company: 'Alpha Inc',
    telephone: [
      { home: '0123456789' },
      { work: '9876543210' }
    ]
  }
);

View the document by running a query that returns all results from the person collection:

db.person.find({});

The result will be something like this:

{
  "_id" : ObjectId("62442429854636a03f6b8534"),
  name: 'Abdul',
  company: 'Alpha Inc',
  telephone: [
    { home: '0123456789' },
    { work: '9876543210' }
  ]
}

How to insert multiple documents

You can insert multiple documents into a collection by passing an array to insertMany() . The following code creates additional person documents and a new company collection:

db.person.insertMany([
  {
    name: 'Brian',
    company: 'Beta Inc'
  },
  {
    name: 'Claire',
    company: 'Gamma Inc',
    telephone: [
      { cell: '3141592654' }
    ]
  },
  {
    name: 'Dawn',
    company: 'Alpha Inc'
  },
  {
    name: 'Esther',
    company: 'Beta Inc',
    telephone: [
      { home: '001122334455' }
    ]
  },
  {
    name: 'George',
    company: 'Gamma Inc'
  },
  {
    name: 'Henry',
    company: 'Alpha Inc',
    telephone: [
      { work: '012301230123' },
      { cell: '161803398875' }
    ]
  }
]);

db.company.insertMany([
  {
    name: 'Alpha Inc',
    base: 'US'
  },
  {
    name: 'Beta Inc',
    base: 'US'
  },
  {
    name: 'Gamma Inc',
    base: 'GB'
  }
]);

Where does _id come from?

MongoDB automatically assigns an _id to each document in a collection. This is an ObjectId – a BSON (Binary JSON) value that contains:

  • the Unix epoch time in seconds at creation (4 bytes)
  • a 5-byte machine/process ID
  • a 3-byte counter starting from a random value

This is the document's primary key. The 24-character hexadecimal value is effectively unique across all documents in the database and cannot be changed once inserted.

MongoDB also provides a getTimestamp() method, so you can obtain a document's creation date/time without having to store a value explicitly. Alternatively, you can define your own unique _id value when creating a document.
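As an illustration, the creation time can be decoded from the hex string directly, since the first 8 hex characters are the epoch seconds. This is a plain-JavaScript sketch of what getTimestamp() returns (no driver required; the helper name objectIdTimestamp is ours, not a MongoDB API):

```javascript
// Decode the creation time embedded in a MongoDB ObjectId hex string.
// The first 4 bytes (8 hex characters) hold the Unix epoch seconds.
function objectIdTimestamp(hexId) {
  const seconds = parseInt(hexId.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

// ObjectId taken from the insert example above
const created = objectIdTimestamp('62442429854636a03f6b8534');
console.log(created.toISOString()); // a date in late March 2022
```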

Data denormalization

The records inserted above set each person's company to a string such as "Alpha Inc". This would not be recommended in a normalized SQL database:

  • It is easy to make mistakes: one person could be assigned "Alpha Inc" and another "Alpha Inc." (with a trailing period). They would be treated as different companies.
  • Updating a company name can mean updating many records.

The SQL solution is to create a company table and link a company to a person using its primary key (probably an integer). No matter how the company name changes, the primary key remains the same and the database can enforce rules to ensure data integrity.

Denormalization is encouraged in MongoDB. You actively duplicate data so that a single document can contain all the information it needs. This has several advantages:

  • Documents are self-contained and easier to read – there is no need to refer to other collections.
  • Write performance can be faster than SQL databases because fewer data integrity rules are enforced.
  • Sharding – or spreading data across multiple machines – becomes easier because there is no need to reference data in other collections.
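The trade-off can be seen in code. Here is a hedged plain-JavaScript comparison (hard-coded sample data, not a database API): the normalized shape needs a join-style lookup across two lists, while the denormalized document already carries everything a reader needs.

```javascript
// Normalized shape: two separate "collections" linked by a key,
// so reading a person's company base requires a join-style lookup.
const companies = [{ _id: 1, name: 'Alpha Inc', base: 'US' }];
const people = [{ name: 'Abdul', companyId: 1 }];

const abdul = people[0];
const abdulCompany = companies.find(c => c._id === abdul.companyId);
console.log(abdulCompany.base); // 'US'

// Denormalized shape: one self-contained document.
// Reads are simple, at the cost of duplicating company details.
const person = {
  name: 'Abdul',
  company: { name: 'Alpha Inc', base: 'US' }
};
console.log(person.company.base); // 'US'
```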

Simple MongoDB query

You can list all documents in a collection, such as person, by using an empty find():

db.person.find({})

The count() method returns the number of documents (in our example, 7):

db.person.find({}).count();

The sort() method returns documents in any order you choose, such as by name in reverse alphabetical order:

db.person.find({}).sort({ name: -1 });

You can also limit the number of documents returned, for example, to find the first two names:

db.person.find({}).sort({ name: 1 }).limit(2);

You can search for specific records by defining a query on one or more fields, for example, to locate all documents where the name is set to "Claire":

db.person.find({ name: 'Claire' });

Logical operators such as $and, $or, $not, $gt (greater than), $lt (less than), and $ne (not equal) are also supported, for example, to find all person documents where the company is "Alpha Inc" or "Beta Inc":

db.person.find({
  $or: [
    { company: 'Alpha Inc' },
    { company: 'Beta Inc' }
  ]
});

In this example database, the same result can be obtained using $nin (not in) to extract all documents whose company is not "Gamma Inc":

db.person.find({
  company: { $nin: ['Gamma Inc'] }
});
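To build intuition, the selection logic of the two queries above can be mimicked in plain JavaScript (a sketch over hard-coded sample data, not MongoDB's implementation): $or keeps documents matching either condition, $nin keeps documents whose value is outside the given list.

```javascript
// The sample person documents from earlier, trimmed to the queried fields.
const person = [
  { name: 'Abdul', company: 'Alpha Inc' },
  { name: 'Brian', company: 'Beta Inc' },
  { name: 'Claire', company: 'Gamma Inc' },
  { name: 'Dawn', company: 'Alpha Inc' },
  { name: 'Esther', company: 'Beta Inc' },
  { name: 'George', company: 'Gamma Inc' },
  { name: 'Henry', company: 'Alpha Inc' },
];

// $or: company is 'Alpha Inc' OR 'Beta Inc'
const orResult = person.filter(
  p => p.company === 'Alpha Inc' || p.company === 'Beta Inc'
);

// $nin: company is not in ['Gamma Inc'] -- selects the same documents
const ninResult = person.filter(p => !['Gamma Inc'].includes(p.company));

console.log(orResult.length, ninResult.length); // 5 5
```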

A second object passed to find() sets a projection that defines which fields are returned. In this example, only the name is returned (note that _id is always returned unless explicitly excluded):

db.person.find(
  { name: 'Claire' },
  { _id: 0, name: 1 }
);

The result is:

{
  "name" : "Claire"
}

With an $elemMatch query, you can find items within an array, such as all documents whose telephone array contains a work entry. The same $elemMatch can be used in the projection to show only the work number:

db.person.find(
  {
    telephone: { $elemMatch: { work: { $exists: true } } }
  },
  {
    _id: 0,
    name: 1,
    telephone: { $elemMatch: { work: { $exists: true } } }
  }
);

The result is:

{
  "name" : "Abdul",
  "telephone" : [
    { "work" : "9876543210" }
  ]
},
{
  "name" : "Henry",
  "telephone" : [
    { "work" : "012301230123" }
  ]
}
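Conceptually, $elemMatch succeeds when at least one array element satisfies the condition, and the projection keeps only the first matching element. A hedged plain-JavaScript sketch of that behavior over the sample data (not MongoDB's actual implementation):

```javascript
// Sample person documents from earlier, trimmed to name and telephone.
const person = [
  { name: 'Abdul', telephone: [{ home: '0123456789' }, { work: '9876543210' }] },
  { name: 'Brian' },
  { name: 'Claire', telephone: [{ cell: '3141592654' }] },
  { name: 'Henry', telephone: [{ work: '012301230123' }, { cell: '161803398875' }] },
];

// Keep documents where at least one telephone entry has a 'work'
// property, then project only the first matching entry.
const result = person
  .filter(p => (p.telephone || []).some(t => 'work' in t))
  .map(p => ({
    name: p.name,
    telephone: p.telephone.filter(t => 'work' in t).slice(0, 1),
  }));

console.log(JSON.stringify(result, null, 2));
```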

Using cursors in MongoDB

Most database drivers allow query results to be returned as an array or similar data structure. However, this can cause memory problems if a collection contains thousands of documents.

Like most SQL databases, MongoDB supports the concept of cursors. A cursor lets an application read query results one at a time before moving on to the next item or abandoning the search.

Cursors can also be used from the MongoDB shell.

let myCursor = db.person.find( {} );

while ( myCursor.hasNext() ) {
  print( tojson( myCursor.next() ) );
}

How to create an index in MongoDB

The person collection currently holds just seven documents, so no query carries a meaningful computational cost. However, imagine you had a million contacts with names and email addresses. The contacts might be sorted by name, but the email addresses would be in a seemingly random order.

If you needed to find a contact by email, the database would have to examine up to a million items before finding a match. Adding an index on the email field creates a lookup "table" in which emails are stored in alphabetical order. The database can then use far more efficient search algorithms to locate the right record.

As the number of documents grows, indexing becomes critical. In general, you should apply an index to any field that may be referenced in a query. You could index every field, but be aware that this slows down data updates and increases the disk space required, because each index must be updated whenever the data changes.
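To see why an index helps, here is a toy plain-JavaScript sketch (not MongoDB's actual B-tree implementation): an unindexed query must scan every value, while an index keeps values sorted so a binary search can be used. We count comparisons to show the difference.

```javascript
// Build a pretend collection of 100,000 email addresses.
const emails = [];
for (let i = 0; i < 100000; i++) emails.push(`user${i}@example.com`);

// Collection scan: check each value until a match is found.
function scan(list, target) {
  let comparisons = 0;
  for (const value of list) {
    comparisons++;
    if (value === target) return comparisons;
  }
  return comparisons;
}

// Index lookup: binary search over a sorted copy of the values.
function indexedLookup(sorted, target) {
  let lo = 0, hi = sorted.length - 1, comparisons = 0;
  while (lo <= hi) {
    comparisons++;
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return comparisons;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return comparisons;
}

const sorted = [...emails].sort();
const target = 'user99999@example.com'; // the last value inserted

console.log(scan(emails, target));          // 100000 comparisons
console.log(indexedLookup(sorted, target)); // a handful of comparisons
```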

MongoDB provides several index types.

Single field index

Most indexes apply to a single field, for example, indexing the name field in ascending alphabetical order:

db.person.createIndex({ name: 1 });

Using -1 reverses the order. That is of little benefit here, but it could be practical for a date field where the most recent event should come first.

In the example mongodemo database, three other indexes are useful:

db.person.createIndex( { company: 1 } );
db.company.createIndex( { name: 1 } );
db.company.createIndex( { base: 1 } );

Compound index on multiple fields

Two or more fields can be included in a single index, for example:

db.person.createIndex( { name: 1, company: 1 } );

This can be useful when one field is frequently used with another field in search queries.

Multikey index of array or object elements

Documents can be complex, and you often need to index fields deeper in the structure, such as work telephone numbers:

db.person.createIndex( { 'telephone.work': 1 } );

Wildcard index

A wildcard index covers every field in a document. It is most practical for smaller, simpler documents that may be queried in a variety of ways:

db.company.createIndex( { '$**': 1 } );

Full-text index

A text index allows you to create search-engine-like queries that examine all string fields and sort results by relevance. You can limit text indexing to specific fields:

db.person.createIndex( { name: "text", company: "text" } );

...or create a text index on all string fields:

db.person.createIndex( { "$**": "text" } );

The $text operator lets you search this index, for example, to find all documents that mention "Gamma":

db.person.find({ $text: { $search: 'Gamma' } });

Note that full-text search matches whole words rather than fragments, so very short or partial search terms may not return useful results.

Other index types

MongoDB provides several other specialized index types, including geospatial, hashed, and TTL (time-to-live) indexes.

How to manage MongoDB indexes

The indexes defined on a collection can be inspected with:

db.person.getIndexes();

This will return an array of results like:

[
  {
    "v" : 2.0,
    "key" : { "_id" : 1.0 },
    "name" : "_id_"
  },
  {
    "v" : 2.0,
    "key" : { "company" : 1.0 },
    "name" : "company_1"
  },
  {
    "v" : 2.0,
    "key" : { "name" : 1.0 },
    "name" : "name_1"
  }
]

"key" defines the fields and sort order, while "name" is a unique identifier for that index – e.g. "company_1" is the index for the company field.

Whether an index is being used can be checked by appending the .explain() method to any query, for example:

db.person.find({ name:'Claire' }).explain();

This returns a large dataset, but the "winningPlan" object shows the "indexName" used in the query:

"winningPlan" : {
  "stage" : "FETCH",
  "inputStage" : {
    "stage" : "IXSCAN",
    "keyPattern" : { "name" : 1.0 },
    "indexName" : "name_1"
  }
}

If necessary, you can drop an index by referencing its name:

db.person.dropIndex( 'name_1' );

Or by using an index specification document:

db.person.dropIndex({ name: 1 });

The dropIndexes() method allows you to drop more than one index in a single command.

Using MongoDB’s Data Validation Schema

Unlike SQL, data definition schemas are not required in MongoDB: you can write any data to any document in any collection at any time.

This provides considerable freedom. However, there are times when you may want to enforce rules, for example, making it impossible to insert a document into the person collection unless it contains a name.

Validation rules can be specified with a $jsonSchema object, which defines an array of required fields and the properties each field must satisfy. The person collection has already been created, but you can still apply a schema to it, specifying that a name string is required:

db.runCommand({
  collMod: 'person',
  validator: {
    $jsonSchema: {
      required: [ 'name' ],
      properties: {
        name: {
          bsonType: 'string',
          description: 'name string required'
        }
      }
    }
  }
});

If you try to insert a person document without a name:

db.person.insertOne({ company: 'Alpha Inc' });

...the command will fail:

{
  "index" : 0.0,
  "code" : 121.0,
  "errmsg" : "Document failed validation",
  "op" : {
    "_id" : ObjectId("624591771658cd08f8290401"),
    "company" : "Alpha Inc"
  }
}

You can also define a schema when you create a collection, before any data is added. The following command implements the same rule as above:

db.createCollection('person', {
  validator: {
    $jsonSchema: {
      required: [ 'name' ],
      properties: {
        name: {
          bsonType: 'string',
          description: 'name string required'
        }
      }
    }
  }
});

This more complex example creates a users collection, validating that a name, an email address, and at least one telephone number must be provided:

db.createCollection('users', {
  validator: {
    $jsonSchema: {
      required: [ 'name', 'email', 'telephone' ],
      properties: {
        name: {
          bsonType: 'string',
          description: 'name string required'
        },
        email: {
          bsonType: 'string',
          pattern: '^.+\@.+$',
          description: 'valid email required'
        },
        telephone: {
          bsonType: 'array',
          minItems: 1,
          description: 'at least one telephone number required'
        }
      }
    }
  }
});
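The email pattern in that schema is an ordinary regular expression: at least one character, an @, then at least one more character. A quick plain-JavaScript check of what it accepts (the sample addresses are made up for illustration):

```javascript
// The $jsonSchema pattern '^.+@.+$' as a JavaScript regular expression.
const emailPattern = /^.+@.+$/;

console.log(emailPattern.test('esther@beta.example')); // true
console.log(emailPattern.test('not-an-email'));        // false: no @
console.log(emailPattern.test('@missing-local-part')); // false: nothing before @
```

This is a deliberately loose check; it rejects obvious mistakes but does not fully validate email syntax.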

How to update an existing document in MongoDB

MongoDB provides several update methods, including updateOne(), updateMany(), and replaceOne(). Each is passed:

  • a filter object used to locate the documents to update
  • an update object – or an array of update objects – describing the data to change
  • an optional options object; its most useful property is upsert, which inserts a new document if no match is found

The following example updates the document for the person whose name is set to "Henry". It removes the work telephone number, adds a home number, and sets a new date of birth:

db.person.updateOne(
  { name: 'Henry' },
  [
    { $unset: [ 'telephone.work' ] },
    { $set: {
      'birthdate': new ISODate('1980-01-01'),
      'telephone': [ { 'home': '789789789' } ]
    } }
  ]
);

The next example updates the document for a person whose name is set to "Ian". No such document currently exists, but setting upsert to true creates it:

db.person.updateOne(
  { name: 'Ian' },
  { $set: { company: 'Beta Inc' } },
  { upsert: true }
);

You can run query commands at any time to check for data updates.

How to delete documents in MongoDB

The update example above used $unset to remove the work telephone number from the "Henry" document. To delete an entire document, you can use one of several deletion methods, including deleteOne(), deleteMany(), and remove() (which can delete one or many).

The newly created document for Ian can be deleted with an appropriate filter:

db.person.deleteOne({ name: 'Ian' });

Using aggregation operations in MongoDB

Aggregations are powerful but can be difficult to understand. An aggregation defines a series – or pipeline – of operations. Each stage of the pipeline performs an operation such as filtering, grouping, calculating, or modifying a set of documents. A stage can also use $lookup operations, which are similar to SQL JOINs. The resulting documents are passed to the next stage of the pipeline for further processing where necessary.

Aggregation is best explained with an example. We'll walk through building a query that returns the name, company, and work phone number (if available) of people who work at an organization in the United States.

The first operation runs a $match to filter for US companies:

db.company.aggregate([
  { $match: { base: 'US' } }
]);

This returns:

{
  "_id" : ObjectId("62442429854636a03f6b853b"),
  "name" : "Alpha Inc",
  "base" : "US"
}
{
  "_id" : ObjectId("62442429854636a03f6b853c"),
  "name" : "Beta Inc",
  "base" : "US"
}

We can then add a $lookup pipeline operator that matches the company name (localField) against the company field (foreignField) in the person collection (from). The output is appended to each company document as an array named employee:

db.company.aggregate([
  { $match: { base: 'US' } },
  { $lookup: {
      from: 'person',
      localField: 'name',
      foreignField: 'company',
      as: 'employee'
    }
  }
]);

The result is this:

{
  "_id" : ObjectId("62442429854636a03f6b853b"),
  "name" : "Alpha Inc",
  "base" : "US",
  "employee" : [
    {
      "_id" : ObjectId("62442429854636a03f6b8534"),
      "name" : "Abdul",
      "company" : "Alpha Inc",
      "telephone" : [
        { "home" : "0123456789" },
        { "work" : "9876543210" }
      ]
    },
    {
      "_id" : ObjectId("62442429854636a03f6b8537"),
      "name" : "Dawn",
      "company" : "Alpha Inc"
    },
    {
      "_id" : ObjectId("62442429854636a03f6b853a"),
      "name" : "Henry",
      "company" : "Alpha Inc",
      "telephone" : [
        { "home" : "789789789" }
      ]
    }
  ]
}
{
  "_id" : ObjectId("62442429854636a03f6b853c"),
  "name" : "Beta Inc",
  "base" : "US",
  "employee" : [
    {
      "_id" : ObjectId("62442429854636a03f6b8535"),
      "name" : "Brian",
      "company" : "Beta Inc"
    },
    {
      "_id" : ObjectId("62442429854636a03f6b8538"),
      "name" : "Esther",
      "company" : "Beta Inc",
      "telephone" : [
        { "home" : "001122334455" }
      ]
    }
  ]
}

A $project operation can now remove everything except the employee array. A subsequent $unwind operation deconstructs the array so that each employee becomes a separate document:

db.company.aggregate([
  { $match: { base: 'US' } },
  { $lookup: { from: 'person', localField: 'name', foreignField: 'company', as: 'employee' } },
  { $project: { _id: 0, employee: 1 } },
  { $unwind: '$employee' }
]);

The result is:

{
  "employee" : {
    "_id" : ObjectId("62442429854636a03f6b8534"),
    "name" : "Abdul",
    "company" : "Alpha Inc",
    "telephone" : [
      { "home" : "0123456789" },
      { "work" : "9876543210" }
    ]
  }
}
{
  "employee" : {
    "_id" : ObjectId("62442429854636a03f6b8537"),
    "name" : "Dawn",
    "company" : "Alpha Inc"
  }
}
{
  "employee" : {
    "_id" : ObjectId("62442429854636a03f6b853a"),
    "name" : "Henry",
    "company" : "Alpha Inc",
    "telephone" : [
      { "home" : "789789789" }
    ]
  }
}
{
  "employee" : {
    "_id" : ObjectId("62442429854636a03f6b8535"),
    "name" : "Brian",
    "company" : "Beta Inc"
  }
}
{
  "employee" : {
    "_id" : ObjectId("62442429854636a03f6b8538"),
    "name" : "Esther",
    "company" : "Beta Inc",
    "telephone" : [
      { "home" : "001122334455" }
    ]
  }
}

Finally, a $replaceRoot operation reshapes each document so that only the person's name, company, and work telephone number are returned, followed by a $sort operation that outputs the documents in ascending name order. The complete aggregate query:

db.company.aggregate([
  { $match: { base: 'US' } },
  { $lookup: { from: 'person', localField: 'name', foreignField: 'company', as: 'employee' } },
  { $project: { _id: 0, employee: 1 } },
  { $unwind: '$employee' },
  { $replaceRoot: {
      newRoot: {
        $mergeObjects: [ {
          name: "$employee.name",
          company: '$employee.company',
          work: { $first: '$employee.telephone.work' }
        }, "$name" ]
      }
  } },
  { $sort: { name: 1 } }
]);

The result is:

{
  "name" : "Abdul",
  "company" : "Alpha Inc",
  "work" : "9876543210"
}
{
  "name" : "Brian",
  "company" : "Beta Inc"
}
{
  "name" : "Dawn",
  "company" : "Alpha Inc"
}
{
  "name" : "Esther",
  "company" : "Beta Inc"
}
{
  "name" : "Henry",
  "company" : "Alpha Inc"
}

There are other ways to achieve this result, but the point is that MongoDB does most of the work. It is rarely necessary to read documents and manipulate data directly in your application code.
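To demystify what each stage contributes, the pipeline can be mimicked in plain JavaScript (a conceptual sketch over hard-coded sample data; array methods stand in for the real stages, which MongoDB executes far more efficiently on the server):

```javascript
// $match -> filter, $lookup -> nested find, $unwind -> flat,
// $replaceRoot -> map, $sort -> sort.
const company = [
  { name: 'Alpha Inc', base: 'US' },
  { name: 'Beta Inc', base: 'US' },
  { name: 'Gamma Inc', base: 'GB' },
];
const person = [
  { name: 'Abdul', company: 'Alpha Inc' },
  { name: 'Brian', company: 'Beta Inc' },
  { name: 'Claire', company: 'Gamma Inc' },
  { name: 'Dawn', company: 'Alpha Inc' },
  { name: 'Esther', company: 'Beta Inc' },
  { name: 'George', company: 'Gamma Inc' },
  { name: 'Henry', company: 'Alpha Inc' },
];

const result = company
  .filter(c => c.base === 'US')                        // $match
  .map(c => person.filter(p => p.company === c.name))  // $lookup
  .flat()                                              // $unwind
  .map(p => ({ name: p.name, company: p.company }))    // $replaceRoot
  .sort((a, b) => a.name.localeCompare(b.name));       // $sort

console.log(result.map(p => p.name));
// [ 'Abdul', 'Brian', 'Dawn', 'Esther', 'Henry' ]
```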

How to run batch MongoDB operations

By default, MongoDB can handle up to 1,000 operations in a batch. This is unlikely to be an issue when using mongosh, but the limit can be reached when an application performs a long series of data operations. Node.js applications are particularly prone to this, because they can quickly issue a stream of asynchronous requests without waiting for them to complete.

To avoid this problem, MongoDB provides a bulk operations API that accepts any number of queued updates, which can be executed in order or in any order.

Here is a pseudocode example for Node.js.

// reference the mycollection collection
const bulk = db.collection('mycollection').initializeUnorderedBulkOp();

// queue any number of data changes
// (method names follow the shell Bulk API; inserts are queued directly,
// while updates and deletes are queued via find())
bulk.insert({ name: 'Jan' });
bulk.find({ name: 'Kai' }).updateOne({ $set: { company: 'Alpha Inc' } });
bulk.find({ name: 'Lee' }).deleteOne();
// etc...

// send all queued operations in one request
bulk.execute();

The final statement sends everything as a single MongoDB request, so you are far less likely to hit the 1,000-operation limit.

Summary

MongoDB provides flexible storage for applications such as content management systems, address books, and social networks, where rigid data structures are difficult to define. Data writes are fast, and sharding across multiple servers is easier.

Developing applications with a MongoDB database can also be liberating: you can store any data in any document of any collection at any time. This is especially practical when you use agile methodologies to develop a prototype or minimum viable product, where requirements evolve over time.

That said, complex queries can be a challenge, and the concept of denormalization can be hard to swallow when you're migrating from the SQL world.

MongoDB is less suitable for applications with strict transactional requirements where data integrity is critical, such as banking, accounting, and inventory control systems. These have well-defined data fields that should be designed before coding begins.

Many application types sit between these two extremes, so choosing a suitable database can be difficult. Fortunately, NoSQL databases, including MongoDB, have begun to adopt SQL-like options, including JOINs and transactions.

In contrast, SQL databases such as MySQL and PostgreSQL now offer NoSQL-like JSON data fields. They may also be worthy of your attention, but as always, the final choice is yours.


Origin blog.csdn.net/weixin_44026962/article/details/135411207