Imagine this: You’ve been tasked with finding the best SaaS data protection solution for your company. You might be wondering why you need to worry about backing up your company’s data in the first place. Isn’t that the SaaS provider’s responsibility?
Leading SaaS providers like Salesforce, Microsoft Dynamics 365, and ServiceNow operate under a shared responsibility model. This means it’s up to you to protect your mission-critical data, metadata, and attachments from accidents or data loss, a responsibility that shouldn’t be taken lightly.
The impact of data loss and corruption
Contrary to popular belief, data loss does happen in both SaaS production and sandbox environments. The main culprits are usually human error, integration errors, bad code, and migration errors.
Human errors can occur during the most routine efforts in production environments. One wrong move can cause a cascade delete, creating a ripple effect throughout the entire organization. From the time wasted diagnosing the data loss to time spent restoring the data, it can pose a major disruption to multiple departments (and business operations as a whole). What a mess!
In both sandbox and production environments, data and metadata loss or corruption are quick to happen during transformational projects, such as org merges, org splits, migrations, and application implementations. In these heavy migration instances, it’s easy for bugs and errors to make their way in, creating technical debt, elongating timelines, and increasing costs. Fortunately, all of this is avoidable with the right data protection solution.
Now that you understand the importance of protecting your data, it’s essential to realize that not all solutions are equally effective. In this article, we’ll explore building a backup tool yourself by exporting data through your SaaS provider, versus purchasing a partner solution, like OwnBackup.
Why building your own tool is risky
SaaS providers offer Application Programming Interfaces (APIs) for running programmatic operations on data and metadata. APIs serve as a library of commands for interacting with data, and you can use them to build your own backup tool. However, this approach is not recommended. Building your own backup tool involves many moving parts and unexpected considerations that can carry a high opportunity cost. You’re better off focusing your company’s valuable resources on your core business than taking the DIY route.
Test questions for building your own tool
The most reasonable way to decide whether to build or buy is to ask the right questions. We call this the “Backup and Recovery Test.” The questions are the same for every company, but not everyone has the same answers.
Whether you’ve already developed an in-house tool or are investigating the feasibility of doing so, here are the six questions to think about as you consider the best option for your company.
1. Does the tool meet your recovery point objective?
Recovery point objective (RPO) indicates how much data loss, measured in time, a company can tolerate, and therefore how often it should back up its data. Typically, companies aim to recover to a point no more than a day old, which means backups must run at least once a day.
With Salesforce, for example, you’ll often export data from the API as Weekly Export .csv files, giving you an RPO of up to seven days.
You can also export data via the API into a SQL database. Depending on your ETL tool, this may or may not pull daily, and could run as often as hourly or every 15 minutes. Be cautious with tools that overwrite previous backups rather than saving each backup individually. Overwriting previous backups can drastically increase your RPO.
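One way to avoid the overwriting problem described above is to write every export to its own timestamped file. The sketch below is a minimal illustration, assuming your export step has already produced a list of record dictionaries; the function name and directory layout are hypothetical.

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

def save_backup(records, backup_dir="backups"):
    """Write each export to its own timestamped .csv file instead of
    overwriting the previous one, so every restore point is preserved."""
    Path(backup_dir).mkdir(parents=True, exist_ok=True)
    # UTC timestamp in the filename makes each snapshot unique and sortable
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(backup_dir) / f"accounts_{stamp}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
        writer.writeheader()
        writer.writerows(records)
    return path
```

Because each run produces a new file, your effective RPO is simply the gap between runs, and no restore point is ever silently destroyed.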
2. Does the tool meet your recovery time objective?
Recovery time objective (RTO) is the time frame by which you must restore after data loss or corruption has occurred. The goal here is for companies to calculate how fast they need to recover by preparing in advance. A quick return depends on multiple factors, including your approach to the following recovery tasks:
- Backing up your data: The type of backup you’re performing can significantly impact your RPO and RTO. For incremental and partial backups, point-in-time recovery will require extra data processing. It can also require piecing together out-of-place data or, worse, losing it altogether.
- Identifying data loss or corruption: How do you find out about data loss? Most in-house tools don’t offer a notification or alert system, which means you’ll find out from users or never find out at all. Never finding out about a data loss or corruption can be detrimental in the long run, impacting critical parts of the business, from forecasts and projections to billing and account management.
- Finding and preparing all the records that were affected: Locating the related child objects, metadata, or attachments can be a nightmare with in-house backup tools that don’t run a comprehensive backup. You’ll either 1) leave out data because you can’t find it or 2) spend hours to weeks piecing together related objects involved in a cascade delete.
- Restoring the lost or corrupted data: Once you’ve identified the last set of valid data from the backups, you’ll have to update or insert the lost or corrupted records back into your provider’s platform. This is most likely a manual process involving a data loader or similar tool, though records can also be restored through the API.
The best way to ensure a data protection solution will match your defined RTO is to test various data loss and corruption scenarios in your sandbox.
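The restore step above is usually batched, because bulk APIs cap how many records one call can carry. The sketch below illustrates that idea only; `upsert_fn` is a hypothetical placeholder for whatever wrapper you write around your provider’s bulk API, and the batch size of 200 is an assumption, not any vendor’s documented limit.

```python
def restore_records(records, upsert_fn, batch_size=200):
    """Restore lost records by upserting them in fixed-size batches.
    Upserting (rather than inserting) on a stable key means re-running
    a partially failed restore never creates duplicate records."""
    restored = 0
    for i in range(0, len(records), batch_size):
        batch = records[i:i + batch_size]
        upsert_fn(batch)  # e.g., a wrapper around your provider's bulk API
        restored += len(batch)
    return restored
```

Testing this loop against a sandbox, as suggested above, is how you find out whether the batch size and retry behavior actually meet your RTO.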
3. Does the tool allow you to recover data from every point in time precisely like before?
Ensuring data integrity boils down to consistent, daily backups. Unfortunately, most in-house backup methods aren’t equipped with this capability. You’ll have to manually pick and choose objects and fields to back up, a list that is likely to change daily, monthly, and yearly.
You’ll also need the ability to compare today’s data to historical data quickly. Lack of compare functionality will make it challenging to identify incorrect vs. correct changes to your data. Without these critical data comparison capabilities, your custom-built tool will likely overwrite unaffected data.
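The compare capability described above amounts to diffing two snapshots keyed by record ID. A minimal sketch, assuming both snapshots are lists of record dictionaries with an `Id` field:

```python
def diff_backups(old, new, key="Id"):
    """Compare two backup snapshots keyed by record ID, returning which
    records were added, deleted, or changed between them."""
    old_by_id = {r[key]: r for r in old}
    new_by_id = {r[key]: r for r in new}
    added = [r for i, r in new_by_id.items() if i not in old_by_id]
    deleted = [r for i, r in old_by_id.items() if i not in new_by_id]
    changed = [new_by_id[i] for i in old_by_id.keys() & new_by_id.keys()
               if old_by_id[i] != new_by_id[i]]
    return added, deleted, changed
```

With a diff like this, a restore can be scoped to only the affected records instead of blindly overwriting data that was never touched.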
Furthermore, maintaining parent-child relationships during restoration can be challenging for a custom-built tool. The reason? Custom tools typically don’t take holistic backups. Because you cannot set a record’s ID via the SaaS provider’s API, and data relationships are based on record IDs, restoring relationships can become a major pain point.
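Concretely, when a deleted parent is re-inserted, the platform assigns it a new ID, so every backed-up child record still points at the old one. A restore tool has to rewrite those references before upserting the children. The sketch below illustrates this with a hypothetical `ParentId` field; real schemas have many such lookup fields, each needing the same treatment.

```python
def remap_parents(children, id_map):
    """After parents are re-inserted with new platform-assigned IDs,
    rewrite each child's ParentId from the old (backed-up) ID to the
    new one so the relationship survives the restore."""
    remapped = []
    for child in children:
        child = dict(child)  # copy; don't mutate the backup in place
        old_parent = child.get("ParentId")
        if old_parent in id_map:
            child["ParentId"] = id_map[old_parent]
        remapped.append(child)
    return remapped
```

Building and maintaining this mapping across every object and lookup field in a cascade delete is exactly the “hours to weeks” of piecing-together work described above.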
4. Is the tool secure enough?
Unless you’re a cybersecurity firm, you may not be able to count on an in-house backup tool’s security. Security of data backups is not only crucial for your confidential data, but also for your customers’ personally identifiable information (PII). Wrongful access to this information could trigger regulatory fines and penalties. Additionally, industry regulations, like HIPAA, SEC 17a-4, and CFR Part 11, require backups to be in place.
If you attempt to build your own tool, make sure that you’re securing your backups with:
- Encryption in transit and at rest to protect data at all times,
- Role-based access controls (RBAC) to restrict who has access to backups,
- IP whitelisting to control which networks can reach your backups,
- Multi-factor authentication to ensure only authorized users have access, and
- Single sign-on (SSO) to reduce the number of attack surfaces hackers could exploit.
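At minimum, the RBAC item above means every backup operation must be checked against a role-to-permission mapping before it runs. The roles and actions below are purely illustrative assumptions, not a prescribed model:

```python
# Hypothetical role-to-permission mapping for backup operations
ROLE_PERMISSIONS = {
    "backup_admin": {"run_backup", "restore", "download"},
    "auditor": {"download"},
}

def can(role, action):
    """Return True only if the role explicitly grants the action;
    unknown roles get no permissions (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Deny-by-default is the key design choice: a typo'd or missing role silently gets nothing, rather than everything.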
5. Can you count on the tool when you need it most?
Your stress-free backup and recovery experience is rooted in automated and dynamic backups, proactive monitoring, and world-class support. It’s almost impossible to achieve all of this when you build your own backup tool. It would require you to:
- Implement auto-discover to support custom objects,
- Keep up with new API versions while supporting metadata, attachments, table data, Chatter, and custom objects,
- Implement a notification system for when a backup fails or finishes with errors,
- Troubleshoot backup failures and errors,
- Monitor backup runtimes and API consumption, and
- Keep all the related code up-to-date.
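The notification item in the list above is worth making concrete: a silent failure is what turns a one-day RPO into a multi-week one. A minimal sketch, assuming `backup_fn` returns a result dictionary with an `errors` list and `notify_fn` is whatever alerting hook you wire up (email, Slack, pager):

```python
def run_with_alert(backup_fn, notify_fn):
    """Run a backup job and send an alert when it fails outright or
    finishes with errors, so failures never go unnoticed."""
    try:
        result = backup_fn()
    except Exception as exc:
        notify_fn(f"Backup FAILED: {exc}")
        raise
    if result.get("errors"):
        notify_fn(f"Backup finished with {len(result['errors'])} error(s)")
    return result
```

Note the two distinct failure modes: a hard crash and a “successful” run that nonetheless skipped records. Both need an alert, and in-house tools commonly handle neither.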
6. Does the tool enable you to provide the necessary data accessibility?
So your mission-critical data is temporarily unavailable. Now what? To ensure business continuity during unexpected events, it’s a long-standing best practice to keep your backups separate from your production data. But backing up your company’s data to an external database alone is not enough. Few people will have access to that database, and granting access to everyone who might need the data poses a significant security risk.
Data availability is also an essential requirement of GDPR and CCPA. As part of the regulatory requirements, you need to give Data Subjects full transparency into the data you have about them, including backed-up data. To do this, you need to include search functionality in your in-house tool that will allow you to find where a Data Subject’s information is located, including within attachments. Under GDPR and CCPA, you’ll also need to incorporate retention controls that store data only for as long as legally necessary.
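The search requirement above means scanning every snapshot, not just current production data. The sketch below shows the simplest version for .csv backups; searching inside attachments (PDFs, images) would require far more machinery, which is part of why this capability rarely exists in home-grown tools.

```python
import csv
from pathlib import Path

def find_data_subject(backup_dir, needle):
    """Scan every backed-up .csv file for rows containing a Data
    Subject's identifier (e.g., an email address), returning
    (filename, row) hits for a GDPR/CCPA access request."""
    hits = []
    for path in sorted(Path(backup_dir).glob("*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if any(needle in str(value) for value in row.values()):
                    hits.append((path.name, row))
    return hits
```

The same scan is also the starting point for retention controls: once you can locate every copy of a Data Subject’s records, you can age out or delete the ones you’re no longer legally allowed to keep.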
Eliminate data downtime with OwnBackup
How did your in-house backup tool fare against the above test? Are you ready to try a better solution?
Data loss and corruption can disrupt your business. With OwnBackup, you can proactively protect your SaaS data and minimize downtime. Available for Salesforce, Microsoft Dynamics 365, and ServiceNow, OwnBackup Recover provides secure, automated backups and fast, stress-free recovery.