Complete Guide to MongoDB Downgrade: Safely Transitioning from Version 7.0 to 6.0

Mydbops
Nov 23, 2023
25 Mins to Read

In the ever-evolving landscape of database management, staying adaptable is key to maintaining a robust and efficient system. MongoDB introduces new versions with enhanced features and functionalities. However, there are instances where the need arises to take a step back, and this guide is crafted to navigate precisely that scenario.

This comprehensive guide serves as your roadmap for the safe and strategic downgrade of your MongoDB deployment, specifically from version 7.0 to version 6.0.

Why Can Downgrading Your MongoDB Version Be Crucial?

Downgrading becomes necessary for the following reasons:

  • Compatibility: The application or drivers used with MongoDB might not yet support version 7.0. Downgrading ensures compatibility with existing components.
  • Stability: Newer versions of MongoDB might have issues or bugs that affect the stability of the database. Downgrading can help avoid these problems.
  • Feature Removal: MongoDB might deprecate or remove certain features or functionality in newer versions. Downgrading can retain access to those features.
  • Testing: The application might need extensive testing with a new MongoDB version. Downgrading temporarily can give you more time to conduct thorough tests.
  • Performance: In some cases, a specific use case or workload might perform better with an older version. Downgrading can help maintain optimal performance.

However, downgrading should be a carefully planned and executed process, as it can involve data migration and potential data loss if not done correctly.

Pre-Downgrade Steps

Prerequisites & Pre-Validations

  • If authentication is enabled, root user access is required.
  • Ensure that the MongoDB v6.0 packages are compatible with your OS version.
  • Internet connectivity must be available on the server.
 
curl -v telnet://repo.mongodb.org:443
	

Take a copy of the config file

If you are using the default configuration file, it is advisable to create a backup copy of the mongod.conf file before proceeding with the downgrade. This precautionary step ensures that you have a safe backup of your configuration settings in case you need to revert to the original configuration in the future.

 
sudo cp /etc/mongod.conf /etc/mongod_copy.conf
	

Take a copy of the service file

  • If you are using the default service file, it is recommended to create a copy of the mongod.service file before initiating the downgrade process.
  • This safeguard allows you to preserve the original service file settings, providing an option to restore them if necessary in the future.
 
sudo cp /lib/systemd/system/mongod.service ~/
	

Take a copy of 7.0 mongod packages

  • To safeguard against potential issues when downgrading from MongoDB version 7.0 to version 6.0 and to ensure compatibility, it's advisable to create a backup of the 7.0 MongoDB packages.
 
# Identify the location of your current “mongod” binary:
> which mongod
/usr/bin/mongod

# Create a directory & copy the MongoDB 7.0 packages:
> mkdir ~/mongo_7
> sudo cp /usr/bin/mongo*  ~/mongo_7/

# Duplicate the MongoDB service file to create a version 7 service file:
> sudo cp /lib/systemd/system/mongod.service /lib/systemd/system/mongod_V7.service

> sudo vi /lib/systemd/system/mongod_V7.service

# Update the ExecStart parameter as follows
ExecStart=/home/ubuntu/mongo_7/mongod --config /etc/mongod_copy.conf
	
  • This backup will be helpful for troubleshooting purposes and can be used to address any compatibility challenges without having to perform another full package update.

Data Backup

  • If you have a daily backup configuration in place, you can skip this step. However, it is strongly recommended to take a backup before proceeding with the downgrade process.
  • In cases where the data size is exceptionally large, using a disk snapshot is a recommended and straightforward method for ensuring data safety (see the snapshot note after this list).
  • If you intend to downgrade a replica set, we strongly recommend following the rolling method for the downgrade. This approach provides a safety net, as it makes it easy to roll back in case any issues or data corruption occur during the downgrade process.
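If you take a disk snapshot while mongod is running, it is safer to flush and lock writes for the duration of the snapshot. A minimal mongosh sketch (assuming the snapshot is taken at the storage layer; unlock immediately after it completes):

 
// Flush pending writes to disk and block new writes while the snapshot is taken
db.fsyncLock()

// ... take the disk snapshot at the storage layer ...

// Release the lock as soon as the snapshot completes
db.fsyncUnlock()
	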

Remove Backward-Incompatible Features

MongoDB 7.0 includes features that are not compatible with earlier releases. Downgrading from 7.0 to an earlier release requires identifying and removing the backward-incompatible features from the MongoDB deployment.

Addressing Compound Wildcard Indexes

  • Compound Wildcard Indexes in MongoDB are unique because they use the $** wildcard operator to index all fields within subdocuments and arrays.
  • These indexes provide flexibility in querying, as they allow you to search within deeply nested data structures without specifying exact paths to fields.
  • However, in MongoDB version 6.0, these indexes are incompatible.
  • MongoDB does permit setting the feature compatibility version (FCV) to 6.0 even with existing compound wildcard indexes.
  • When you replace a MongoDB 7.0 package with version 6.0, problems arise. Specifically, the mongod service refuses to start.
  • In such cases, the mongod log captures an error message stating Found an invalid index.
 
{"t":{"$date":"2023-09-14T18:19:00.386+00:00"},"s":"F",  "c":"INDEX",    "id":28782,   "ctx":"initandlisten","msg":"Found an invalid index","attr":{"descriptor":{"v":2,"key":{"productId":1,"productDetails.$**":1},"name":"productId_productDetails_index"},"namespace":"Information.movies","error":"CannotCreateIndex: bad index key pattern { productId: 1, productDetails.$**: 1 }: wildcard indexes do not allow compounding"}}
	

In light of this problem, it's strongly recommended to address it by dropping the compound wildcard indexes before proceeding with the downgrade from MongoDB version 7.0 to version 6.0.
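For context, a compound wildcard index pairs one or more regular keys with a $** wildcard key. A hypothetical mongosh sketch of how the index from the log above would have been created on MongoDB 7.0:

 
// Valid on MongoDB 7.0, but rejected by 6.0 binaries
db.getSiblingDB('Information').movies.createIndex(
  { productId: 1, 'productDetails.$**': 1 },
  { name: 'productId_productDetails_index' }
)
	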

Identification & Drop

This JavaScript code defines a function, searchCompWildcardIndexes, which searches for compound wildcard indexes in MongoDB version 7.0 databases. It iterates through databases, collections, and indexes, checking for indexes with keys containing $**. If such indexes are found, it records them in an output array and prints the results.

Snippet

 
searchCompWildcardIndexes = function () {
    var output = [];
    
    db.getMongo().getDBNames().forEach(function (dbname) {
        if (dbname != "admin" && dbname != "config" && dbname != "local") {
            db.getSiblingDB(dbname).getCollectionInfos().forEach(function (collInfo) {
                var cname = collInfo.name;
                var collType = collInfo.type;
                
                if (typeof collType == 'undefined' || collType == "collection") {
                    if (cname != "system.profile" && cname != "system.js" && cname != "system.namespaces" && cname != "system.indexes" && cname != "system.views") {
                        var wcIndex = [];
                        
                        db.getSiblingDB(dbname).getCollection(cname).getIndexes().forEach(function (i) {
                            if (Object.keys(i.key).length > 1 && Object.keys(i.key).toString().search(/\$\*\*/) > -1) {
                                wcIndex.push(i);
                            }
                        });
                        
                        if (wcIndex.length > 0) {
                            output.push({
                                dbName: dbname,
                                collection: cname,
                                compoundWildcardIndexes: wcIndex
                            });
                        }
                    }
                }
            });
        }
    });
    
    if (output.length > 0) {
        printjson(output); 
    } else {
        print("There are no wildcard indexes!");
    }
}
	

Sample Output

 
test> searchCompWildcardIndexes()
[
  {
    dbName: 'Information',
    collection: 'movies',
    compoundWildcardIndexes: [
      {
        v: 2,
        key: {
          productId: 1,
          'productDetails.$**': 1
        },
        name: 'productId_productDetails_index'
      },
      {
        v: 2,
        key: {
          productId: 1,
          '$**': 1
        },
        name: 'productId_productDetails_index1',
        wildcardProjection: {
          'productDetails.name': 1,
          'productDetails.color': 1
        }
      }
    ]
  }
]
	
 
# Drop Index Command

db.getSiblingDB('<dbName>').getCollection('<collName>').dropIndex('<indexName>')
	
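For example, using the index found in the sample output above, the drop command would be:

 
db.getSiblingDB('Information').getCollection('movies').dropIndex('productId_productDetails_index')
	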

Dealing with TTL Indexes with Partial Indexes

Starting with MongoDB 6.3 (a Rapid Release), time series collections support TTL indexes with a partial filter expression. While this feature offers notable benefits, it introduces compatibility issues with earlier versions of MongoDB.
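For illustration, here is a hypothetical mongosh sketch of the incompatible pattern: a TTL index with a partial filter expression on a time series collection (the names mirror the sample output below):

 
// Allowed from MongoDB 6.3 onward, but blocks a downgrade to 6.0
db.getSiblingDB('mydbops').weather24h.createIndex(
  { timestamp: 1 },
  {
    expireAfterSeconds: 3600,
    partialFilterExpression: { sensor: { $eq: '40.761873, -73.984287' } }
  }
)
	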

Script

 
partialTTLIndexes = function () {
    var output = [];

    db.getMongo().getDBNames().forEach(function (dbname) {
        db.getSiblingDB(dbname).getCollectionInfos().forEach(function (collInfo) {
            if (collInfo.type == 'timeseries') {
                var partialTTLIndex = [];
                db.getSiblingDB(dbname).getCollection(collInfo.name).getIndexes().forEach(function (i) {
                    if (i.expireAfterSeconds && i.partialFilterExpression) {
                        partialTTLIndex.push(i);
                    }
                });
                if (partialTTLIndex.length > 0) {
                    output.push({
                        dbName: dbname,
                        collection: collInfo.name,
                        partialTTLIndexes: partialTTLIndex
                    });
                }
            }
        });
    });

    if (output.length > 0) {
        printjson(output);
    } else {
        print("There are no partial TTL indexes!");
    }
}
	

Output

 
mydbops> partialTTLIndexes()
[
  {
    dbName: 'mydbops',
    collection: 'weather24h',
    partialTTLIndexes: [
      {
        v: 2,
        key: {
          timestamp: 1
        },
        name: 'timestamp_1',
        partialFilterExpression: {
          sensor: {
            '$eq': '40.761873, -73.984287'
          }
        },
        expireAfterSeconds: 3600
      }
    ]
  }
]
	

Note: Dropping an index is a significant operation, so ensure you have identified the correct index to be dropped.

 
# Drop Index Command

db.getSiblingDB('<dbName>').getCollection('<collName>').dropIndex('<indexName>')
	
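Using the sample output above, the command would be:

 
db.getSiblingDB('mydbops').getCollection('weather24h').dropIndex('timestamp_1')
	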

FYI: If these indexes are not dropped, MongoDB will not allow setting the FCV to 6.0.

 
test> db.adminCommand({ setFeatureCompatibilityVersion: '6.0', confirm: true})

MongoServerError: Cannot downgrade the cluster when there are secondary TTL indexes with partial filters on time-series collections. Drop all partial, TTL indexes on time-series collections before downgrading. First detected incompatible index name: 'timestamp_1' on collection: 'mydbops.weather24h'
	

Managing Time Series Collections with Bucketing Parameters

In MongoDB 6.3 and higher, instead of granularity, you can set bucket boundaries manually using the two custom bucketing parameters, bucketMaxSpanSeconds and bucketRoundingSeconds. Consider this approach if you need the additional precision to optimize a high volume of queries and insert operations. However, this is one of the features that is incompatible with downgrading MongoDB.
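For context, such a collection is created by supplying the custom bucketing parameters in place of granularity. A hypothetical mongosh sketch (the timeField and metaField names are assumptions for illustration):

 
// MongoDB 6.3+: custom bucket boundaries instead of granularity
db.getSiblingDB('test3').createCollection('weather24h', {
  timeseries: {
    timeField: 'timestamp',       // assumed field name
    metaField: 'sensor',          // assumed field name
    bucketMaxSpanSeconds: 86400,
    bucketRoundingSeconds: 86400  // both parameters must be set to the same value
  }
})
	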

Identify

Find the collections with custom bucketing parameters as follows:

 
timeSeriesWithBucket = function () {
	var output = [];

	db.getMongo().getDBNames().forEach(function (dbname) {
		db.getSiblingDB(dbname).getCollectionInfos().forEach(function (collInfo) {
			var timeSerieswithBounds = [];
			if (collInfo.type == 'timeseries') {
				if (!collInfo.options.timeseries.granularity) {
					timeSerieswithBounds.push(collInfo.name);
				}
			}
			if (timeSerieswithBounds.length > 0) {
				output.push({
					dbName: dbname,
					timeSeriesCollectionsWithBounds: timeSerieswithBounds,
				});
			}
		});
	})
	if (output.length > 0) {
		printjson(output);
	} else {
		print("There are no time Series Collections with bucketing parameters!");
	}
}
	

Output

 
red [direct: primary] test3> timeSeriesWithBucket()
[
  {
    dbName: 'test3',
    timeSeriesCollectionsWithBounds: [
      'weather24h'
    ]
  }
]
	

Solution

By using the collMod command, we can change the collection parameters:

 
db.runCommand({
   collMod: "weather24h",
   timeseries: { granularity: "seconds" }  // or "minutes" or "hours"
})
	

If you are using the custom bucketing parameters bucketRoundingSeconds and bucketMaxSpanSeconds instead of granularity, include both custom parameters in the collMod command and set them to the same value, matching one of the default granularity mappings (see the MongoDB documentation).

 
db.runCommand({
   collMod: "weather24h",
   timeseries: {
      bucketRoundingSeconds: 86400,
      bucketMaxSpanSeconds: 86400
   }
})
	

Issue

If the required actions are not taken on these collections, setting the FCV to 6.0 will throw the following error.

 
red [direct: primary] test3> db.adminCommand({ setFeatureCompatibilityVersion: '6.0',confirm: true})
MongoServerError: Cannot downgrade the cluster when there are time-series collections with custom bucketing parameters. In order to downgrade, the time-series collection(s) must be updated with a granularity of 'seconds', 'minutes' or 'hours'. First detected incompatible collection: 'test3.weather24'
	

Config Server Validation in Shard

In a sharded cluster, if any collections on the config servers have the changeStreamPreAndPostImages feature enabled, the downgrade is blocked. Note that modifying collections in a config server directly is not recommended.
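For context, a collection typically ends up in this state through a collMod (or createCollection) command like the following hypothetical sketch, using the names from the sample error below:

 
// How the option is typically enabled on a collection
db.getSiblingDB('tech').runCommand({
   collMod: 'Reports',
   changeStreamPreAndPostImages: { enabled: true }
})
	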

Issue

When setting the Feature Compatibility Version (FCV) to 6.0, if the required actions are not taken on these collections, they will throw the following error:

 
[direct: mongos] test> db.adminCommand({ setFeatureCompatibilityVersion: '6.0',confirm: true})
MongoServerError: Cannot downgrade the config server as collection tech.Reports has 'changeStreamPreAndPostImages' enabled. Please unset the option or drop the collection.
	

Identify

To identify collections with the changeStreamPreAndPostImages parameter in the config cluster, execute the following code:

 
configValidation = function () {
	var output = [];

	db.getMongo().getDBNames().forEach(function (dbname) {
		db.getSiblingDB(dbname).getCollectionInfos().forEach(function (collInfo) {
			var collections = [];
			if (collInfo.options.hasOwnProperty('changeStreamPreAndPostImages') && collInfo.options.changeStreamPreAndPostImages.enabled == true) {
				collections.push(collInfo.name);
			}
			if (collections.length > 0) {
				output.push({
					dbName: dbname,
					changeStreamPreAndPostImagesCollections: collections,
				});
			}
		});
	})
	if (output.length > 0) {
		printjson(output);
	} else {
		print("There are no Collections with 'changeStreamPreAndPostImages' parameters!");
	}
}
	

Output

 
[
  {
    dbName: 'tech',
    changeStreamPreAndPostImagesCollections: [
      'Reports'
    ]
  }
]
	

Solution

To resolve this issue, you can use the collMod command to disable the changeStreamPreAndPostImages option on the affected collection:

 
db.getSiblingDB("").runCommand({
   collMod: "",
   changeStreamPreAndPostImages: { enabled: false  }
})
	
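Applied to the collection from the sample output above, the command would be:

 
db.getSiblingDB('tech').runCommand({
   collMod: 'Reports',
   changeStreamPreAndPostImages: { enabled: false }
})
	

This disables the changeStreamPreAndPostImages feature for the tech.Reports collection.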

Handling Collections with Queryable Encryption

As you embark on the journey of downgrading your MongoDB from version 7.0 to 6.0, it's essential to address the intricacies of queryable encryption (QE) features. While MongoDB 7.0 introduced QE as a powerful data security tool, downgrading might necessitate some strategic decisions regarding its usage.

Note: Queryable encryption is exclusive to replica set architectures. For standalone deployments, you can skip this step, as it's not applicable.

Sample Encrypted Document

Below is the sample document in the collection for which the queryable encryption field is enabled.

 
> db.getSiblingDB('medicalRecords').patients.findOne()
{
  _id: ObjectId("65046372b8398cc2b2126ce6"),
  firstName: 'ben',
  age: 21,
  lan: 'english',
  patientId: Binary.createFromBase64("Dq1RHHocgEdmiSEpfq2gQboQLj5AIPSYGhFrL8NE1UtNv4RzcS87evX1WP5z76SwcpjtTIxp9oCyQTZWWIK3eFxMg7FrEpXx5SVWefkp7bzqLrkobiThcHkuZz9SKFL3sagAwpYIFKGfWFtFGcw9R8ARPzR1anvDclov440dJr707/HkU7bAIroTeVTBiu17gJ+CJTvI0udAmrGtjV+5IOAJKfnJScX+hSutM1zeMOuoWGjOB4u1CcT+fgh8w3yB7R9wSgL9xSUsE4PKFQR5fQau", 6),
  medications: Binary.createFromBase64("ECULVnx4R0hgkXil/OnbjAQEVkPKHjJAJYxflbCbELZNz3lK1icgcvzwTg46BNtWoC9FfeR8gMcpMIlCOMXEorJeKOcFHNUTQKwKR2pVVN5UTLoNSkAhMLRHlfqUXjYR9VKwrsrsqK22jTPmCCAYmdpq", 6),
  __safeContent__: [
    Binary.createFromBase64("giU7yNLnQJqxrY1fuSDgCSn5yUnF/oUrrTNc3jDrqFg=", 0)
  ]
}
	

Issues if the QE feature is not disabled before downgrading

Issue-I: Compatibility Error

If you attempt to keep QE features active in MongoDB 6.0, you'll encounter a configuration error. While existing data remains readable in decrypted form, inserting new data becomes problematic.

 
pymongo.errors.ConfigurationError: Driver support of Queryable Encryption is incompatible with server. Upgrade server to use Queryable Encryption. Got maxWireVersion 17 but need maxWireVersion >= 21 (MongoDB >=7.0)
	

Issue-II: Insertion Errors

Similarly, if you attempt to insert new documents into a QE-enabled collection on MongoDB 6.0, the driver cannot select a compatible server, and the operation times out with an error like the following:

 
pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector "", Timeout: 30s, Topology Description: ]>
	

How Do You Know Which Collections Have the QE Feature Enabled?

Before tackling QE issues during a downgrade, you need to identify which collections have QE features enabled. This information is crucial for determining the appropriate actions to take.

Script:

Here's a JavaScript script to help you find collections with QE (Queryable Encryption) enabled:

 
showEncrpColl = function () {
    var output = [];
    db.getMongo().getDBNames().forEach(function (dbname) {
        db.getSiblingDB(dbname).getCollectionInfos().forEach(function (collInfo) {
            if (collInfo.options.encryptedFields) {
                output.push(dbname + "." + collInfo.name);
            }
        })
    })
    return output;
}
	

Output:

 
[
  'medicalRecords.patients',
  'test1.patients'
]
	

Resolving QE Issues During Downgrading MongoDB

Before proceeding, consult with the application team to determine if they require the data and if the collection should be converted into a regular collection.

 
Understanding the Dilemma: First and foremost, it's crucial to acknowledge that QE features won't function at the application level if you decide to downgrade to MongoDB 6.0. This realization prompts a critical conversation with your application team.

Case-I: Dropping Collections

In scenarios where QE features and the associated data are deemed unnecessary, you can safely drop the collections. MongoDB provides a straightforward command for this purpose:

 
db.getSiblingDB('<dbName>').getCollection('<collName>').drop()
	

Sample Output:

 
medicalRecords> show collections
patients
enxcol_.patients.ecoc
enxcol_.patients.esc

medicalRecords> db.getSiblingDB('medicalRecords').patients.drop()
true

medicalRecords> db.getSiblingDB('medicalRecords').getCollectionNames()
[]
	

FYI: This command effectively removes the specified collection, including its metadata collections.

Case-II: Transitioning Queryable Encryption to Standard Collections

If the application team agrees to make the necessary changes at the application level and convert these collections into regular collections, consider the following:

Note: Despite having root privileges and access to encrypted keys, it's impossible to export encrypted data in a decrypted format using standard MongoDB tools. Decryption is only feasible through dedicated drivers.

Solution: The solution is to re-insert the decrypted data into another collection through the driver.

Precautions

  • At the application level, disable any functionalities reliant on collections with queryable encryption.
  • Keep in mind that while this procedure won't affect other collections or features, there might be temporary performance delays.
  • Therefore, it's advisable to schedule this operation during non-production hours.

Recommended Action Plan:

  • Snapshot Data: Begin by taking a disk snapshot of your MongoDB data to ensure data integrity.
  • Disable Features: At the application level, disable any functionalities that rely on collections with queryable encryption.
  • Isolate a Secondary Node: Select one of the secondary nodes in your replica set and isolate it by running it as a standalone member. This step ensures that you have an isolated environment for the following operations.
  • Code Modifications: Adapt your codebase as needed to accommodate the data migration. The goal is to read the encrypted data and insert it into another collection in a human-readable format on the isolated node.
  • Data Insertion: Execute the modified code to insert the decrypted data into the new collection on the standalone node. This ensures that you have a readable copy of your data.
  • Data Backup and Restore: Once the insertion is complete and verified, you can proceed to dump and restore this data back to the exact replica set. This step ensures that your data is back in its original environment.
  • Consider Large Collections: If you're dealing with exceptionally large collections, consider adjusting your code at the driver level. You can read the data from the standalone node and then point the insert operation to the replica set. However, be aware that this approach may impact both Linux and MongoDB resource usage, so proceed with caution.

This structured approach minimizes disruption to your application's performance while ensuring a smooth transition away from queryable encryption.

Driver Code

Below is the Python script that demonstrates the action plan for migrating data from collections with queryable encryption to normal collections, using the PyMongo driver. This script assumes you have a standalone MongoDB node set up and have made the necessary preparations, including disabling features reliant on queryable encryption.

Note: Remember to tailor the script and database names according to your specific setup and requirements.

 
import os
import asyncio
import motor.motor_asyncio
from pymongo.encryption_options import AutoEncryptionOpts

async def run_aggregation_async(coll, motor_client, destinationDB, destinationColl):
    async for doc in coll.aggregate([{"$project": {"__safeContent__": 0}}]):
        await motor_client[destinationDB][destinationColl].insert_one(doc)

async def main():
    # Define your MongoDB connection URI
    connection_uri = "mongodb://127.0.0.1:27018/"
    path = "./master-key.txt"  # Local Master Key Path
    sourceDB = 'medicalRecords'
    sourceColl = 'patients'
    key_vault_namespace = "encryption.__keyVault"  # Data Encryption Key Location
    destinationDB = 'dcMedications'
    destinationColl = 'patients'
    with open(path, "rb") as f:
        local_master_key = f.read()

    kms_providers = {
        "local": {
            "key": local_master_key
        },
    }

    key_vault_db_name, key_vault_coll_name = key_vault_namespace.split(".", 1)
    
    # Initialize a Motor client
    client = motor.motor_asyncio.AsyncIOMotorClient(connection_uri)
    key_vault = client[key_vault_db_name][key_vault_coll_name]

    opts = AutoEncryptionOpts(
        kms_providers,
        key_vault.full_name,
        bypass_query_analysis=True,
    )

    # Initialize a Motor client for encrypted data
    motor_client = motor.motor_asyncio.AsyncIOMotorClient(connection_uri, auto_encryption_opts=opts)
    db = motor_client[sourceDB]
    coll = db[sourceColl]

    # Aggregate and insert documents asynchronously
    await run_aggregation_async(coll, motor_client, destinationDB, destinationColl)

if __name__ == "__main__":
    asyncio.run(main())
	

After executing the script:

 
test> db.getSiblingDB('dcMedications').patients.count()
7


test> db.getSiblingDB('dcMedications').patients.findOne()
{
  _id: ObjectId("650475c70f45ae25ae420b94"),
  firstName: 'ben',
  age: 21,
  lan: 'english',
  patientId: 1234,
  medications: [ 'one', 'Levothyroxine' ]
}
	

Considerations:

  • The execution time of the script can vary depending on the size of your data.
  • To ensure a smooth process, it's advisable to run the script on a standalone MongoDB node, preferably after isolating it from the replica set.
  • This minimizes the potential impact on production resources and data integrity.

Downgrading Steps

Note: Before proceeding with the downgrade procedure, ensure that the pre-downgrade steps have been completed.

Set Feature Compatibility Version (FCV) to 6.0

Starting in MongoDB 7.0, the setFeatureCompatibilityVersion command must include the confirm: true parameter.

 
db.adminCommand({  getParameter:1,featureCompatibilityVersion: 1})
{ featureCompatibilityVersion: { version: '7.0' }, ok: 1 }

db.adminCommand({ setFeatureCompatibilityVersion: '6.0',confirm: true})
{ ok: 1 }

db.adminCommand({  getParameter:1,featureCompatibilityVersion: 1})
{ featureCompatibilityVersion: { version: '6.0' }, ok: 1 }
	

FYI: If you don't include confirm: true, the command fails with an error reminding you that downgrading the binary version after an FCV downgrade will require support assistance:

 
> db.adminCommand( { setFeatureCompatibilityVersion: '6.0' } )

MongoServerError: Once you have downgraded the FCV, if you choose to downgrade the binary version, it will require support assistance. Please re-run this command with 'confirm: true' to acknowledge this and continue with the FCV downgrade.
	

Stop the mongod service

Stopping the mongod service is a crucial step that prevents potential conflicts during the package downgrade process.

Follow these commands:

 
# check the status
sudo systemctl status mongod

# stop the mongod
sudo systemctl stop mongod

# check the status
sudo systemctl status mongod
	

Replace the v7.0 Packages with the Latest v6.0 Packages

Remove the existing mongo packages

There is no option to downgrade the packages directly, so we need to first purge the existing packages and then install the required v6.0 packages.

 
# Ubuntu
sudo apt-get purge mongodb-org*

# CentOS
sudo yum erase $(rpm -qa|grep -i mongodb-org)
	

Install the v6.0 packages

Remove the v7.0-related repository file

  • If the v7.0 repository file still exists when you attempt to install MongoDB v6.0, the package manager will pick up the v7.0 packages again, overriding the intended v6.0 installation.
 
# Ubuntu
sudo rm -f /etc/apt/sources.list.d/mongodb-org-7.0.list

# CentOS
sudo rm -f /etc/yum.repos.d/mongodb-org-7.0.repo
	

Update the v6.0 repo and Install

  • Please refer to the MongoDB v6.0 installation documentation for your specific OS to get the exact repository setup and installation commands.

Validate

Verify the mongod version

 
> mongod  --version

db version v6.0.10
Build Info: {
    "version": "6.0.10",
    "gitVersion": "8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a",
    "openSSLVersion": "OpenSSL 1.1.1f  31 Mar 2020",
    "modules": [],
    "allocator": "tcmalloc",
    "environment": {
        "distmod": "ubuntu2004",
        "distarch": "x86_64",
        "target_arch": "x86_64"
    }
}
	

Verify and update the config and service files

Validate the service file and config file parameters, then restore the copies taken as backups before starting the activity.

 
sudo cp /etc/mongod_copy.conf /etc/mongod.conf

sudo cp ~/mongod.service /lib/systemd/system/mongod.service 
	

Starting the mongod service

 
# check the status
sudo systemctl status mongod

# start the mongod
sudo systemctl restart mongod

# check the status
sudo systemctl status mongod
	
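Once the service is up, a quick sanity check from mongosh confirms both the binary version and the FCV (expected output shown for illustration):

 
// Confirm the running binary version
db.version()
// 6.0.10

// Confirm the feature compatibility version
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// { featureCompatibilityVersion: { version: '6.0' }, ok: 1 }
	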

Rollback Stage

When dealing with a major activity like this, it is crucial to have a rollback plan ready in case any issues arise during the downgrade process.

Utilize Disk Snapshot

  • Access the disk snapshot created in the pre-downgrade stage.
  • Mount this snapshot on another partition and make necessary updates to configuration and service files.
 
# check the status
sudo systemctl status mongod_V7

# start the v7 instance from the backed-up binaries
sudo systemctl start mongod_V7

# check the status
sudo systemctl status mongod_V7
	

Troubleshoot in the Background

  • In the background, you can proceed with troubleshooting.
  • While troubleshooting, you can purge or upgrade the existing mongod packages in the default path.
  • It's important to note that even though we've removed the default MongoDB packages, the currently running MongoDB instance won't stop, because it runs from the MongoDB binaries copied to the backup directory (/home/ubuntu/mongo_7/).

Additional Considerations

  • If the mongod process is not running with a service file, ensure it is configured as a service. Use the --shutdown option when stopping mongod for a clean shutdown (see the mongosh alternative after this list).
  • Plan and schedule the downgrade activity to minimize downtime.
  • It's recommended to take a snapshot after stopping the mongod service for added safety.
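As referenced above, if mongod runs without a service file, a clean shutdown can also be issued from mongosh (assuming appropriate privileges):

 
// Cleanly shut down the mongod instance you are connected to
db.getSiblingDB('admin').shutdownServer()
	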


As you embark on the journey of MongoDB downgrade, follow this guide diligently to navigate potential challenges and ensure a smooth transition. By adhering to best practices and maintaining a systematic approach, you can confidently manage the downgrade process, maintaining data integrity and system efficiency.

Ready to Safely Downgrade Your MongoDB Version? Our Expert Services Ensure a Smooth Transition from version 7.0 to 6.0.
