(Please read part one here)
Migrating data to Salesforce can be a pivotal step towards streamlining processes and enhancing efficiency. However, choosing the right migration strategy is crucial to minimize disruptions and ensure a smooth transition. Let’s explore two common approaches – the Big Bang Approach and the Phased Approach (sometimes referred to as Waterfall and Agile Approaches) – along with best practices for implementation and post-migration validation.
Migration Strategies:
Big Bang Approach:
The Big Bang Approach migrates all data at once, consolidating effort and reducing project cost. With a single cut-over deadline, businesses can retire legacy systems quickly and avoid running two systems in parallel. The trade-off is a higher risk of downtime if issues arise during the migration.
Phased Approach:
In contrast, the Phased Approach reduces the risk of downtime by moving data in stages, segmented by factors such as sales, support, or region. Although it is more expensive than the Big Bang Approach and involves multiple deadlines, it keeps systems running continuously throughout the migration.
Best Practices:
Pre-Migration:
During the pre-migration phase, address the performance issues that large data volumes can cause:
- Consider Salesforce's deferred sharing calculation or parallel sharing rule recalculation options, which are enabled by raising a case with Salesforce for your organization.
- Alternatively, temporarily assign the topmost role in the hierarchy (for example, CEO) to all record owners, which helps avoid sharing recalculations during the load.
- Set the Organization-Wide Default (OWD) sharing settings for all objects to Public Read/Write.
- Disable unnecessary automated processes such as triggers, flows, and validation rules.
- Disable Email Deliverability so that no emails are sent to users during the migration.
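As one illustration of reviewing automation before the load, the sketch below uses the open-source simple_salesforce Python library (an assumption for this example, not something prescribed here) to list flows that are still active via the standard FlowDefinitionView object. The credentials are placeholders, and a similar query against the Tooling API can be used for validation rules and triggers.

```python
# Sketch: list flows that are still active before a bulk data load,
# so they can be reviewed and deactivated during migration.
# Assumes the third-party simple_salesforce library; credentials are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",   # placeholder
    password="********",
    security_token="********",
    domain="test",                  # "test" targets a sandbox login
)

active_flows = sf.query(
    "SELECT ApiName, Label, ProcessType "
    "FROM FlowDefinitionView WHERE IsActive = true"
)

for flow in active_flows["records"]:
    print(f"Active flow: {flow['Label']} ({flow['ApiName']}, {flow['ProcessType']})")
```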
Implementation:
During the implementation phase, validate your data to avoid data skew, which can significantly degrade performance. Data skew can occur in the following scenarios:
- Account Data Skew – a large number of child records (more than 10,000) related to the same account in Salesforce.
- Lookup Data Skew – a large number of records (more than 10,000) linked to a single record through a lookup relationship.
- Ownership Skew – a large number of records (more than 10,000) owned by a single user.
It’s important to discuss these issues with stakeholders and distribute data uniformly to avoid problems like record locking.
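One practical way to catch skew early is to profile the source extract before loading it. The sketch below is plain Python; the file name and the AccountId/OwnerId column names are placeholders for whatever schema your extract actually uses.

```python
# Sketch: flag parent-record and ownership skew in a source CSV extract.
# Column names (AccountId, OwnerId) and the file name are placeholders.
import csv
from collections import Counter

SKEW_THRESHOLD = 10_000  # Salesforce's commonly cited guideline

parent_counts: Counter = Counter()
owner_counts: Counter = Counter()

with open("contacts_extract.csv", newline="") as f:   # hypothetical extract
    for row in csv.DictReader(f):
        parent_counts[row["AccountId"]] += 1
        owner_counts[row["OwnerId"]] += 1

for account_id, n in parent_counts.items():
    if n > SKEW_THRESHOLD:
        print(f"Account data skew: {account_id} has {n} child records")

for owner_id, n in owner_counts.items():
    if n > SKEW_THRESHOLD:
        print(f"Ownership skew: {owner_id} owns {n} records")
```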
Finalizing the migration sequence is also vital: migrate all record-owning users first so that ownership can be set correctly in a single pass and rework is avoided later. For example, when migrating sales data, a logical order is Users, Products, Price Books, Leads, Accounts, Contacts, Opportunities, Opportunity Products, Quotes, and Quote Line Items.
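Expressed in terms of the standard Salesforce API names (Product2, Pricebook2, OpportunityLineItem, and so on), that order can be driven by a simple script such as the sketch below; load_object is a placeholder for whatever tool or API call actually performs each load.

```python
# Sketch: drive the load in a fixed dependency order so parent records
# (and their owning users) exist before child records reference them.
LOAD_ORDER = [
    "User", "Product2", "Pricebook2", "Lead", "Account", "Contact",
    "Opportunity", "OpportunityLineItem", "Quote", "QuoteLineItem",
]

def load_object(sobject_name: str) -> None:
    """Placeholder: invoke your migration tool or Bulk API job for this object."""
    print(f"Loading {sobject_name} ...")

for sobject in LOAD_ORDER:
    load_object(sobject)
```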
When finalizing the API, Bulk API 2.0 is recommended for large data volumes, as it is specifically designed to process massive datasets efficiently. Running the migration scripts in different sandboxes is also a recommended approach: a 10% data load in a QA sandbox helps finalize the migration scripts, a 50% load in a Partial Copy sandbox validates them against a larger data volume, and a 100% load in a Full Copy/UAT sandbox avoids surprises during the production migration and helps you communicate accurate timelines to stakeholders.
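As a minimal sketch of batched loading, the example below uses simple_salesforce's Bulk helper to insert a large set of contacts in batches; whether you drive Bulk API 2.0 directly or through a tool such as Data Loader, the batching idea is the same, so treat this call as illustrative. The credentials, AccountId value, and record payload are placeholders.

```python
# Sketch: batched bulk insert via simple_salesforce's Bulk helper.
# Credentials and record data are placeholders for your own migration job.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",
    password="********",
    security_token="********",
    domain="test",
)

# Records prepared from the source extract (placeholder data).
contacts = [
    {"LastName": f"Migrated {i}", "AccountId": "001XXXXXXXXXXXXXXX"}  # placeholder Id
    for i in range(25_000)
]

# The helper splits the payload into batches (batch_size supported in recent
# versions of the library) so the server can process them independently.
results = sf.bulk.Contact.insert(contacts, batch_size=10_000)

failures = [r for r in results if not r.get("success")]
print(f"Inserted {len(results) - len(failures)} records, {len(failures)} failures")
```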
Post-Migration:
After the implementation is completed, restore the org to its normal configuration and verify the results:
- Update the Organization-Wide Default (OWD) settings back to your required values.
- Resume the deferred sharing recalculation, or assign the correct roles to record owners, to ensure proper access and security.
- Re-enable all automated processes and Email Deliverability to restore full functionality.
- Validate that all data has been migrated in the correct format and that relationships are accurately reflected in your org, using manual testing or automation tools.
- Collaborate with stakeholders to create test cases for data migration testing, and ensure that User Acceptance Testing (UAT) evaluates them against business goals.
- Create reports and compare them against the legacy data to confirm the accuracy of the migration, as sketched below.
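A simple starting point for that comparison is a record-count reconciliation per object. The sketch below again assumes simple_salesforce; the legacy counts, credentials, and object list are placeholders, and in practice you would extend this to field-level and relationship checks.

```python
# Sketch: reconcile record counts per object against the legacy system's counts.
# Legacy counts and credentials are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="admin@example.com",
    password="********",
    security_token="********",
    domain="test",
)

legacy_counts = {"Account": 120_000, "Contact": 450_000, "Opportunity": 80_000}

for sobject, expected in legacy_counts.items():
    # A COUNT() query returns the total in the result's "totalSize" field.
    actual = sf.query(f"SELECT COUNT() FROM {sobject}")["totalSize"]
    status = "OK" if actual == expected else "MISMATCH"
    print(f"{sobject}: legacy={expected}, salesforce={actual} -> {status}")
```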
In conclusion, Salesforce data migration is a critical process that requires careful planning and execution. Whether adopting the Big Bang Approach or the Phased Approach, implementing best practices throughout the pre-migration, implementation, and post-migration phases is essential for a successful transition. By prioritizing data integrity, performance optimization, and stakeholder collaboration, organizations can leverage Salesforce to its fullest potential, driving efficiency and maximizing business outcomes.
Salesforce Lead Architect
Pratik Rudrakshe is a Salesforce Lead Architect at Palladin Technologies, with 17 Salesforce certifications underscoring his expertise. He has introduced four managed packages on the AppExchange and implemented a customer-facing journey in insurance using Vlocity Insurance, OmniOut, LWC, and the Newport Design System (NDS). Certified in Salesforce Industries/Vlocity CPQ and proficient in Vlocity and Salesforce LWC, Pratik also holds a Copado certification, highlighting his CI/CD skills. His specialties include Salesforce Industries/Vlocity CPQ (Communications & Insurance), EPC, ESM, Contract Management, Vlocity LWC, Salesforce Lightning (LWC & Aura), Experience/Community Cloud, Salesforce Health Cloud, NPSP (Nonprofit Cloud), Integrations, CI/CD, Triggers, Apex, Visualforce, CPQ, Order Management, CTI Integration, JavaScript, CSS, and jQuery. Pratik’s innovative solutions and deep expertise consistently drive exceptional outcomes for clients.