Platform and Data Security
Rock Robotic applies industry best practices to keep your data secure. This document outlines the measures we take to prevent your data from falling into the wrong hands.
Server and Network Security
Rock Robotic hosts the majority of its application on the Amazon Web Services (AWS) platform. AWS provides state-of-the-art data center security that complies with industry standards such as SOC, PCI DSS, and ISO 27001. All physical network and server security responsibility for these parts of the application is delegated to AWS. For more information regarding AWS’s physical security practices, check out their security whitepaper.
All devices used by our staff members are required to be password protected, with automatic locking after short periods of disuse in addition to full-disk encryption. Staff are further required to enable multi-factor authentication (MFA) on all services that provide it.
Much like our physical network security, the majority of our virtual network security is handled by AWS. Amazon provides completely isolated environments where we deploy our applications, and they do so for over one million companies and government organizations across the globe. Some well-known clients who use Amazon to protect their data include NASA, Shell, Autodesk, British Gas, GE, Hitachi, Lafarge, Trimble, and the US State Department. For more information regarding AWS’s virtual network security practices, check out their security whitepaper.
On top of the security outlined above, Rock Robotic implements several other best practice measures on premises and in AWS to further ensure your data’s safety:
Authentication and Authorization
Rock Robotic uses Auth0 to handle our authentication and authorization scheme. Auth0 is ISO 27001, ISO 27018, SOC 2 Type II, PCI DSS, EU-US Privacy Shield, and Gold CSA STAR certified. To learn more about Auth0's security platform, view their whitepaper.
We maintain all configuration in code to provide transparency into underlying network and infrastructure changes. It’s checked into version control, and all changes are reviewed for security, scalability, and durability before deployment. We also test all changes thoroughly in a dedicated staging environment before deployment to the production environment.
Rock Robotic's servers are automatically and continuously patched with the latest security updates. This ensures that we are able to minimize our exposure to known vulnerabilities.
Rock Robotic’s servers employ the use of full disk encryption to ensure that data stored on them is not accessible to others. This protects the data both in case of physical access to the servers and when disks are disposed of.
As a web application, code changes are made and deployed in a continuous manner. No action is required to update to the latest version beyond refreshing the browser. In order to maintain application security, code quality, and minimize the introduction of bugs, we follow a strict software development life cycle (SDLC).
Design and Development
New features and changes are thoroughly architected and designed. Once approved, development begins and all code is checked into source control.
All code changes are subject to peer review for quality, performance, and correctness. Code is also reviewed from a security standpoint, adhering to the OWASP top 10 guidelines.
After the review period, changes are deployed into a staging environment where they’re thoroughly tested by developers and our QA team. We perform both feature and regression testing to ensure that changes behave as intended and haven’t introduced errors elsewhere.
When this testing is complete and any required fixes are made, the changes are deployed for customer use. Large or experimental changes may be deployed as beta features for a limited time. Users will have the option to turn these features on or off during this time.
Feedback and bug reports are gathered from customers, and changes and fixes are made as appropriate.
In rare situations where changes must be applied quickly to fix bugs or recover from a security or downtime incident, some of the above steps such as code reviews, testing, and the staging release may be skipped. Should this be the case, all changes will be subject to review and testing once things have returned to normal.
Authentication and Authorization
Rock Robotic's main login method is via email address and password. Your password is never stored in plain text. All passwords are stored with industry-leading security at Auth0.
Passwords must also meet the following complexity requirements: (1) be at least eight characters long, (2) contain a lowercase letter (a-z), an uppercase letter (A-Z), and a number (0-9), (3) contain one of the following special characters (!@#$%^&*), (4) not match any of the previous five passwords set, (5) not contain any part of your personal data, and (6) not appear in this list of the top 10,000 most common passwords.
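In practice, Auth0 enforces these rules server-side; the sketch below is only a hypothetical illustration of the checks described above, with a three-entry stand-in for the common-password list.

```python
import re

# Stand-in for the top-10,000 common-password list referenced above.
COMMON_PASSWORDS = {"password", "12345678", "qwertyui"}

def is_valid_password(password: str, previous: list[str], personal_data: list[str]) -> bool:
    """Return True if `password` satisfies the complexity rules outlined above."""
    if len(password) < 8:                                   # rule 1: length
        return False
    if not (re.search(r"[a-z]", password)
            and re.search(r"[A-Z]", password)
            and re.search(r"[0-9]", password)):             # rule 2: character classes
        return False
    if not re.search(r"[!@#$%^&*]", password):              # rule 3: special character
        return False
    if password in previous[-5:]:                           # rule 4: password history
        return False
    if any(part and part.lower() in password.lower()
           for part in personal_data):                      # rule 5: no personal data
        return False
    if password.lower() in COMMON_PASSWORDS:                # rule 6: not a common password
        return False
    return True
```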
Should you forget your password, it can be reset by providing the email linked to your account. A link with a cryptographic token is sent to the address provided, allowing you to reset your password.
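A typical cryptographic reset-token flow looks like the following sketch, in which the token is unguessable, stored only as a hash, and time-limited. The one-hour validity window and the storage scheme are assumptions for illustration, not Rock Robotic's actual implementation (which is handled via Auth0).

```python
import hashlib
import hmac
import secrets
import time

RESET_TTL_SECONDS = 3600  # assumed one-hour validity window

def issue_reset_token() -> tuple[str, str, float]:
    """Generate a token for the email link plus the hash to store server-side."""
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy, unguessable
    stored_hash = hashlib.sha256(token.encode()).hexdigest()
    expires_at = time.time() + RESET_TTL_SECONDS
    return token, stored_hash, expires_at

def verify_reset_token(token: str, stored_hash: str, expires_at: float) -> bool:
    """Check the presented token against the stored hash in constant time."""
    if time.time() > expires_at:
        return False  # link has expired
    presented = hashlib.sha256(token.encode()).hexdigest()
    return hmac.compare_digest(presented, stored_hash)
```

Storing only the hash means a leaked database table cannot be replayed as valid reset links.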
Single Sign On (SSO)
Users with Google, Facebook, or GitHub accounts may use the respective “Sign in with” buttons to log into Rock Robotic for a single sign on experience without any additional setup.
Multi-Factor Authentication (MFA)
MFA is required when logging in. Additionally, all Rock Robotic employees are required to log in with MFA to further ensure data safety.
Once a user is logged in, Rock Robotic stores session tokens as secure, HTTP-only cookies on the user’s browser to identify them as having access. This is further protected through the use of cross-site request forgery (CSRF) tokens and strict CORS policies to prevent cross-domain requests and unauthorized use of the cookie. Cookies expire after two weeks, after which users are automatically logged out.
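The cookie attributes described above can be sketched with Python's standard library as follows; the cookie name and value are hypothetical examples, not Rock Robotic's actual configuration.

```python
from http.cookies import SimpleCookie

TWO_WEEKS = 14 * 24 * 3600  # two-week expiry described above

def session_cookie_header(session_token: str) -> str:
    """Build the Set-Cookie value for a secure, HTTP-only session cookie."""
    cookie = SimpleCookie()
    cookie["session"] = session_token
    cookie["session"]["secure"] = True        # sent over HTTPS only
    cookie["session"]["httponly"] = True      # not readable from JavaScript
    cookie["session"]["samesite"] = "Strict"  # blocks cross-site sends (CSRF defence)
    cookie["session"]["max-age"] = TWO_WEEKS  # automatic logout after two weeks
    return cookie.output(header="").strip()
```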
In addition to the code reviews mentioned above, we use several other measures to ensure vulnerabilities are not present in the application.
User-submitted data may be stored in a database and re-rendered in the browser. This creates opportunities for SQL injection and cross-site scripting (XSS) attacks. All data entered into databases or rendered to HTML is sanitized appropriately.
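A minimal sketch of these two defences, using SQLite as a stand-in for the production database: parameterized queries neutralize SQL injection on the way in, and output escaping neutralizes XSS on the way out.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

def save_comment(body: str) -> None:
    # Parameterized query: the driver binds `body` as data, never as SQL,
    # so input like "'; DROP TABLE comments; --" cannot alter the statement.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (body,))

def render_comment(body: str) -> str:
    # Escape on output so a stored "<script>" tag renders as text, not markup.
    return f"<p>{html.escape(body)}</p>"
```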
Logging and Monitoring
In order to provide an audit trail and to enable quick investigation of potential threats or issues, all requests to Rock Robotic services are securely logged with enough information to recreate events.
Some information that may be logged includes IP addresses, request headers, request payloads, device information, status codes, response times, and failed login attempts. We never log confidential or sensitive data. All logs are retained and backed up for the duration of their usefulness. Additionally, error rates and performance metrics are constantly monitored to ensure you have the best possible end-user experience.
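Keeping confidential data out of logs is typically done by redacting sensitive fields before a record is written. The field names below are hypothetical examples, not Rock Robotic's actual log schema.

```python
# Fields whose values must never reach the log stream (assumed names).
SENSITIVE_FIELDS = {"password", "authorization", "token"}

def redact(record: dict) -> dict:
    """Return a copy of a request log record with sensitive values masked."""
    return {key: ("[REDACTED]" if key.lower() in SENSITIVE_FIELDS else value)
            for key, value in record.items()}
```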
AWS is well known for its stability and reliability. This is backed up by service level agreements that ensure uptime and keep AWS accountable for downtime. Nevertheless, outages can occur in exceptional circumstances.
As such, we’ve designed all of our services to be highly available and failure-resilient. All hosts and applications are constantly monitored for availability. Should a failure occur, an automated fallback is initiated. This strategy mitigates both instance and data center-level failure.
Furthermore, the services comprising the application are monitored using an external tool that checks uptime from several locations across the globe. If an outage occurs, a technical member of our staff is alerted immediately and the outage is resolved as quickly as possible based on its severity. Rock Robotic endeavors to maintain a 99% yearly uptime.
Since the beginning, we’ve designed the Rock Robotic Platform to meet the large data requirements that come with processing, hosting, analyzing, and serving survey-grade mapping data across the globe. We do this primarily by leveraging AWS’s effectively infinite scalability and global coverage. This, coupled with demand-based, application, and infrastructure-level autoscaling, means that we can quickly and easily scale up our applications, and the underlying infrastructure, to meet any level of demand.
Rock Robotic classifies all user-submitted information as confidential and essential. We use several methods to ensure that customer data is always available and secure:
Data Storage and Transmission
All data transmitted to and from the Rock Robotic Platform is encrypted in transit over HTTPS using TLS v1 or higher, with a 128-bit or stronger cipher, depending on the client. All internal transfers of data between Rock Robotic services are similarly protected by encryption.
Once submitted, all data is stored securely in AWS with access controls preventing unauthorized access. S3 buckets are configured to be private by default, and access keys or pre-signed URLs are required to access the data. Similarly, databases are password protected and may be accessed only from whitelisted IP addresses. Access and permissions to alter or delete data are delegated only where necessary, based on the principle of least privilege.
All data for processing or display is uploaded to an S3 bucket located in the US-East AWS region. From here the data is transferred, with encryption, to AWS servers in US-East for processing. These servers store this data, with encryption, until it is no longer required for processing.
Any outputs resulting from processing are pushed to an S3 bucket in US-East. Metadata and other data entered into the platform is stored in MySQL databases located in the US-East AWS region.
Where practicable, all data is encrypted at rest, including in queues, databases, data volumes, and S3, using AES-256 encryption. Encryption keys are created, managed, and secured using the AWS Key Management Service (KMS). We automatically rotate keys yearly.
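For reference, S3 bucket-level default encryption with a KMS key is expressed as a server-side encryption configuration of roughly the following shape; the account ID and key ID below are placeholders, not Rock Robotic's actual values.

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```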
In addition to the aforementioned storage level access controls, the Rock Robotic Platform provides user-configurable access controls at an application level. We deny access to the data within a portal or site by default, and require a portal administrator, or another user with the Manage Access permission, to invite a user to view the data within a portal. It’s possible to specify fine-grained permissions when granting access, allowing the principle of least privilege to be applied. To make it easier to manage users and their permissions, they can be grouped into roles that can also be assigned permissions.
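The deny-by-default, role-based model described above can be sketched as a simple lookup: a user may act only if one of their assigned roles grants the required permission. The role and permission names here are hypothetical examples, not the platform's actual vocabulary.

```python
# Assumed example roles, each granting a set of fine-grained permissions.
ROLE_PERMISSIONS = {
    "viewer": {"view_data"},
    "surveyor": {"view_data", "upload_data"},
    "admin": {"view_data", "upload_data", "manage_access"},
}

def is_allowed(user_roles: list[str], permission: str) -> bool:
    """Deny by default: allow only if some assigned role grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Grouping permissions into roles keeps least privilege manageable: granting or revoking a role updates every permission it carries at once.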
In the unlikely event of customer data loss, procedures and safeguards are in place to ensure recovery. The majority of customer data is stored in Amazon S3, which is rated to provide 99.999999999% durability. On top of this, object versioning is enabled by default on all buckets, which ensures that even if an object is deleted or overwritten, it can be recovered.
The remaining customer data is stored in continuously backed-up databases, providing point-in-time recovery to the minute.
Similarly, all application services are deployed in a highly available manner with automatic recovery from instance and data center-level failures.
In the event of unauthorized access to your data we will notify you as soon as possible after becoming aware of the issue.