Google OAuth Review Process – for Restricted Scopes

What is OAuth?

OAuth (Open Authorization) is an open standard framework for token-based authorization on the internet. It enables an end user’s account information held by providers such as Facebook and Google to be used by third-party services, without exposing the user’s account credentials to those third parties.
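The code-exchange step of this flow can be sketched as follows. The token endpoint URL is Google's documented one, while the client ID, secret, authorization code, and redirect URI below are placeholders, not real values:

```python
# Sketch of the OAuth 2.0 authorization-code token exchange.
GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_token_request(client_id, client_secret, auth_code, redirect_uri):
    """Build the form payload a backend would POST to exchange an
    authorization code for access and refresh tokens."""
    return {
        "grant_type": "authorization_code",  # fixed by the OAuth 2.0 spec
        "code": auth_code,                   # one-time code from the consent redirect
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,        # must match the registered URI
    }

payload = build_token_request("my-client-id", "my-secret",
                              "auth-code-from-redirect",
                              "https://example.com/oauth/callback")
# A real app would then POST this payload to GOOGLE_TOKEN_ENDPOINT.
```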

Google OAuth Review Process

If you are an API developer, you are likely to receive an email like the one depicted here.
The process can be broadly divided into two phases:

1. The OAuth review process
2. The security assessment

If your app accesses Gmail’s restricted scopes, you have to go through both of these phases. More details are available here.

1. The OAuth review process

The process starts with initiating the review from your Google Developer Console. You will have to complete a questionnaire, which mostly helps Google understand how your app uses the restricted scopes. You only have to do this for the production version of the app; lower environments can be marked as “internal” and need not go through this process.

After you initiate the review, Google’s security team will reach out to you requesting a YouTube video demonstrating the usage of the restricted scopes in your app. Once you share the video, Google will either respond with an approval or with a feedback email requesting more information or changes. We received some feedback from Google and had to share a couple of videos before we got an approval.

Before the pointers, two general observations. Google usually takes a long time to respond in this phase: despite multiple follow-ups, we had to wait a month or two for responses to some of these emails, possibly because they had a large volume of requests from app developers at the time. We also felt there was some disconnect in their responses, as it looked like every response from our end was reviewed by a different person at Google; weeks after we had initiated the security assessment, we received an email stating that we had missed the deadline for initiating it. However, Google did acknowledge the mistake on their end after we responded with the SOW that had already been executed.

Listed below are a few pointers which might help you reduce feedback from Google.

  • Follow the design guidelines given by Google for styling the sign-in button: https://developers.google.com/identity/branding-guidelines#top_of_page
  • Have a web page for your app that people can access externally, without having to sign in.
  • Ensure that users can access your privacy policy page from your home page. A link to it should be shown on sign-in, and users should only be allowed to proceed after accepting the privacy policy.
  1. While recording the video, go through the privacy policy on sign-in and demonstrate that users need to accept it before proceeding.
  2. Your policy should explicitly mention the use of all restricted scopes.
  3. The policy should also mention how and why the restricted scopes are used: who has access to this data, where it is stored, and whether it can be viewed by your support staff or is only used by the app without human access.
  • While recording the video, capture as much detail as possible to demonstrate the usage of Google’s restricted scopes within your app.
  1. A code walkthrough wherever necessary, e.g., fetching the OAuth token and its use
  2. Demonstrate the storage of sensitive data and the usage of encryption

If Google is satisfied with all the details about your app and is convinced that your project is compliant with their policies, you will get an approval email. You will also be informed if your app has to undergo a security assessment, as depicted.

2. Security Assessment

The security assessment phase involved more live discussions and meetings with the assessors, so the overall process is quicker. You have a dedicated team assigned to help you. Google gave us the contacts of two third-party security assessors. We reached out to both of them and felt that ‘Leviathan’ was better in terms of communication; they shared more information about the overall process, and we were more comfortable going ahead with them. We had to fill in and sign a few documents before we got started, which involved:

  • Filling out an SAQ (Self-Assessment Questionnaire), which helps the assessor understand the app and the infrastructure
  • Signing the SOW
  • Signing a mutual NDA

After that, we made the payment and got started with the process. We had an initial introduction meeting where we were introduced to their team and our assessment was scheduled. To give you a rough idea, our schedule was about two months after the initial discussions. As per the SOW, the assessment would include the following targets. These would likely differ based on the individual application and its usage of the restricted scopes. For reference, ours was an iOS app.

  • Website
  • RESTful APIs
  • Mobile Application (iOS)
  • External Facing Network
  • Developer Infrastructure
  • Policy & Procedure Documentation

The assessor would retest after we completed resolving all the vulnerabilities. The first retest is included in the SOW; additional retests are chargeable. The timeline we had before Google’s deadline was pretty tight, and we wanted to understand from the assessor whether we could do anything to increase our chances of getting it right on the first pass. The assessors were kind enough to share details about some of the tools they use for penetration testing, so that we could execute them ahead of time to understand where we stood and resolve as much as possible before the actual schedule.

Preparation for the assessment

As part of your preparation for the assessment, you can use the following tools, which help you identify vulnerabilities in your application and infrastructure. Also, having some basic policy documentation in place will save you time.

Scout Suite – an open-source multi-cloud security-auditing tool. You can execute it against your infrastructure, and it will generate a report listing the vulnerabilities it finds. Resolving as many as you can before the assessment will surely help.

Burp Suite – Burp Suite is not open source, but you can either buy it or use the trial version. It is a vulnerability scanner that scans all your API endpoints for security vulnerabilities. Executing Burp Suite and addressing the vulnerabilities marked High or above will help significantly before going through the assessment. It is recommended to run Burp Suite on your lower environments and NOT on production, because Burp Suite tests every endpoint by calling it more than a thousand times; you will end up creating a lot of junk data in whichever environment you run it on.

Policy documentation – We were asked to share a whole set of documents before the assessment. We already had most of this documentation in place, so it was not a problem for us. But if you don’t have any documentation for your project, preparing some basic documentation in advance will save you time. I have listed a few examples here:

  • Software Development Guidelines
  • Network diagrams
  • Information security policy
  • Risk assessment policy
  • Incident response plan

Actual penetration testing from the assessor

The assessor initiated the process as per the schedule. The first thing they did was create a Slack channel for communication between our team and theirs. We had to share the App Store links, website details, and the necessary credentials for our infrastructure. They also shared a SharePoint folder for exchanging all the documentation and reports. We started uploading the necessary documents while, in parallel, they started the penetration testing and the review of our infrastructure. Again, do NOT share the production environment for penetration testing, as it will create a lot of junk data and may delete existing entities.

After two days of testing, they shared an intermediate report, and we started addressing the vulnerabilities. After about a week, we got the final report of the vulnerabilities. We addressed all of them and shared the results back. Here are a few remediations that were suggested for us:

  • We had to add contact details on our web page for users to report vulnerabilities
  • Enable multi-factor authentication on our AWS logins
  • Provide logs around Google OAuth token usage
  • Enable encryption on RDS and EBS volumes
  • Provide documentation demonstrating KMS (Key Management Service) usage
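To illustrate the kind of KMS usage behind the last two points, here is a minimal sketch of envelope encryption: a fresh data key encrypts each record, and a master key (held in a KMS such as AWS KMS) wraps the data key. The XOR keystream below is a standard-library placeholder only, NOT real encryption; in practice you would use AES-GCM via a vetted library and let the KMS wrap the key.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Placeholder cipher for illustration only -- use AES-GCM in practice."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    """Encrypt data with a fresh data key, then wrap the data key with the
    master key (the wrapping step is what a real KMS performs for you)."""
    data_key = secrets.token_bytes(32)
    ciphertext = _keystream_xor(data_key, plaintext)
    wrapped_key = _keystream_xor(master_key, data_key)
    return wrapped_key, ciphertext  # store both; the master key never leaves the KMS

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = _keystream_xor(master_key, wrapped_key)  # unwrap the data key
    return _keystream_xor(data_key, ciphertext)
```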

Upon completion of the assessment, the assessor will provide a document containing the following components:

  • An executive summary, including a high-level summary of the analysis and findings and prioritized recommendations for remediation.
  • A brief description of assessment methodologies.
  • A detailed discussion of analysis results, including relevant findings, risk levels, and recommended corrective actions.
  • Appendices with relevant raw data, output, and reports from the analysis tools used during the engagement.

That was the end. A couple of days after the approval from the assessor, we got an approval email from Google.


A Brief Overview of Quality Assurance and Testing

Quality Engineering or Software Quality Testing

Business success is achieved through the combined efforts of different teams working in cohesion within an organization. This success is directly related to the individual success of each team and its role.

A software product’s success also goes through phases similar to those of an organization, and each and every step, from conceptualization to release, is crucial to its success. Quality Engineering, or Software Quality Testing, is one such crucial phase; however, it can sometimes be the most commonly disregarded and undervalued part of the development process.

We at Tarams have a high regard for quality engineering, and we believe the effort associated with testing is a justified investment that can ensure stability and reduce the overall costs of buggy, poorly executed software. Highly qualified and intuitive quality engineers, who form the core of our team, are well versed in different testing approaches, further strengthening our resolve to deliver healthy and error-free software products.

This document briefly explains the challenges faced during testing and our techniques for overcoming them to deliver a high-quality product.

Testing Life Cycle

A successful software product requires it to be tested thoroughly and consistently. At Tarams, we involve the Quality Engineering (QE) teams as early as the design phase. Our test architects start by reviewing the proposed software architecture and designs. They set up the test plans and test processes based on the architecture and technologies involved.

We emphasize using ‘Agile Development Methodology’. This methodology involves small and rapid iterations of software design, build, and test recurring on a continuous basis, supported by on-going planning. Simply put, test activities happen on an iterative, continuous basis within this development approach.

The diagram above depicts the standard development life cycle. Quality Assurance (QA) through QE is involved in all phases, while the main activities are tailored to the context of the system and the project.

The stages below showcase the efforts towards ensuring quality of the product:

Test Planning

Test planning involves activities that define the objectives of testing and the approach for meeting those objectives within the imposed constraints. Test plans may be revisited based on feedback from monitoring and control activities. At Tarams, our QA teams prepare the test plan and test strategy documents during this phase, which outline the testing policies for the project.

Test Analysis

During test analysis, the business requirements are analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria. The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs.

Test Design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. So, while test analysis answers the question – “what to test?”, test design answers the question “how to test?”. As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also as with test analysis, the identification of defects during test design is an important potential benefit.

Test Implementation

During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures in test management tools such as Zephyr, QMetry, TestRail etc. Test design and test implementation tasks are often combined. In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution.

Test Execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:  

  1. Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
  2. Executing tests either manually or by using test execution tools
  3. Comparing actual results with expected results, analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur)
  4. Reporting defects based on the failures observed
  5. Logging the outcome of test execution
  6. Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results
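Steps 1, 3, 4, and 5 above can be sketched as a minimal result log; the record fields here are our own illustration rather than any particular tool's schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestResult:
    """One executed test case, with the IDs and versions recorded in step 1."""
    case_id: str
    build_version: str
    expected: object
    actual: object
    outcome: str = field(init=False)

    def __post_init__(self):
        # Step 3: compare actual vs expected; step 5: log the outcome.
        self.outcome = "PASS" if self.actual == self.expected else "FAIL"

def failures(results):
    """Step 4: the failed results that should be reported as defects."""
    return [r for r in results if r.outcome == "FAIL"]

run = [
    TestResult("TC-101", "1.4.2", expected=200, actual=200),
    TestResult("TC-102", "1.4.2", expected="OK", actual="ERROR"),
]
```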

Test Completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. In the test completion phase, the QA team prepares the QA sign-off document, indicating whether the release can be made to production, along with supporting data (for example, test execution results, defects found in the release, open and closed defects, defect priority, etc.).

Manual Testing

Manual testing is a software testing process in which test cases are executed manually, without any automated tool. It is one of the most fundamental testing processes, as it can find both visible and hidden defects in the software. This type of testing is mandatory for every newly developed piece of software before automated testing. It requires great effort and time, but it provides strong confidence in the quality of the software.

The QA teams at Tarams start testing either when a testable (independently testable) part of the requirement is developed or when the entire requirement is developed. The first round of testing happens on small feature parts as they become ready, followed by an end-to-end testing round on another environment once all requirements are developed.

Mentioned below is an overview of the different testing approaches used at Tarams.

Regression Testing

Software maintenance is an activity which includes enhancements, error corrections, optimization, and deletion of existing features. These modifications may cause the system to work incorrectly, so Regression Testing is implemented to address this. Regression tests cover the end-to-end business use cases, along with edge cases that may break application functionality if left untested.

On every release, the QA team manually executes the regression test suite on the respective build, after completing the testing of release items. The QA team prepares a test execution report for each release. As the project grows in stability, we plan to automate these tests, execute them as part of every build, and include them in the continuous integration pipeline.

Compatibility Testing

A mark of good software is how well it performs on a plethora of platforms. To avoid shipping a defective product that has not been rigorously tested on different devices, the QA process makes sure that all features work properly across combinations of various devices, operating systems, and browsers.

This involves testing not only on different platforms but also on different versions of the same platform. This also includes the verification of backward compatibility of the platform.
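As a sketch, the combinations to cover can be enumerated as a cartesian product of the supported browsers and platforms; the values and the exclusion list below are purely illustrative:

```python
from itertools import product

# Illustrative support matrix -- real values would come from usage analytics.
browsers = ["Chrome", "Firefox", "Safari"]
operating_systems = ["Windows 11", "macOS 14", "Android 14", "iOS 17"]

def compatibility_matrix(browsers, operating_systems):
    """All browser/OS pairs, skipping combinations that cannot exist."""
    impossible = {("Safari", "Windows 11"), ("Safari", "Android 14")}
    return [(b, os) for b, os in product(browsers, operating_systems)
            if (b, os) not in impossible]

matrix = compatibility_matrix(browsers, operating_systems)
```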

Verification of forward and backward compatibility on different platform versions is smooth until QA runs out of physical devices to test the product. This poses one of the major threats to the quality of any software, as the device inventory cannot always be kept up to date with the ever-increasing device models in the market.

This problem is overcome by looking into usage analytics to understand all the platforms, browsers, and devices used to access the product, and by using a premium cloud service such as SauceLabs to perform the testing. Such services provide both virtual and physical device access for testing. However, device farms have some inherent limitations, such as testing applications with video/audio playback or recording functionality, and lag in actions and responses over the network.

Whenever there are updates to the APIs, in the case of mobile applications the QA team tests the older versions of the mobile application to ensure that they also work smoothly with the updated APIs.
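One way to sketch such a backward-compatibility check is to assert that a newer API response still carries every field an older client version depends on; the endpoint and field names here are hypothetical:

```python
# Fields a hypothetical v1 mobile client reads from the /order endpoint.
V1_REQUIRED_FIELDS = {"order_id", "status", "total"}

def is_backward_compatible(response: dict, required=frozenset(V1_REQUIRED_FIELDS)):
    """True if the (possibly newer) response can still serve old clients."""
    return required.issubset(response.keys())

v2_response = {"order_id": "A1", "status": "shipped", "total": 42.0,
               "eta_minutes": 30}   # new field added in v2, harmless to v1
```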

Performance Testing

Performance testing is a form of software testing that focuses on how a running system performs under a particular load. This is not about finding software bugs or defects. Performance testing is measured according to benchmarks and standards.

As soon as several features are working, the first load tests should be conducted by the quality assurance team. From that point forward, performance testing should be part of the regular testing routine each day for each build of the software.

Our QA teams have performed performance testing for a B2C mobile application that involved buying an item and getting it delivered to the doorstep. The major functionalities of the application were searching for a product across stores, placing an order for a product, and getting it delivered. While the delivery executive is on the way to deliver the product, the customer can track the delivery.

The following performance aspects were tested for the project:

  • API/Server response
  • Network performance – under different bandwidths like WiFi, 4G, 3G
  • A range of reports is configured to be generated after the build runs, such as aggregate graphs, graph results, response time, tree results, and a summary report.

We leverage the built-in performance analyzer in Xcode (Instruments) and can also enable monitoring in New Relic.
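Independent of those tools, a rough server-response measurement can be sketched with the standard library; the timed call below is a stand-in for a real API request:

```python
import time
import statistics

def measure_latency(call, samples=5):
    """Time repeated calls and summarize the results in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        call()                      # in practice: an HTTP request to the API
        timings.append((time.perf_counter() - start) * 1000.0)
    return {"mean_ms": statistics.mean(timings),
            "max_ms": max(timings)}

stats = measure_latency(lambda: sum(range(1000)))  # stand-in for an API call
```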

Machine Learning Models Testing

Machine Learning models represent a class of software that learns from a given set of data and then makes predictions on new data based on that learning. The word “testing” in relation to Machine Learning models primarily refers to evaluating model performance in terms of the accuracy/precision of the model. Note that “testing” means something different in conventional software development than in Machine Learning model development.
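For a categorization model, such "testing" largely reduces to metrics like accuracy and per-class precision, which can be computed directly from predictions and ground-truth labels; the labels below are illustrative:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def precision(y_true, y_pred, positive):
    """Of the items predicted as `positive`, how many really are."""
    truths_for_predicted = [t for t, p in zip(y_true, y_pred) if p == positive]
    if not truths_for_predicted:
        return 0.0
    return sum(t == positive for t in truths_for_predicted) / len(truths_for_predicted)

y_true = ["shoes", "shoes", "books", "books"]
y_pred = ["shoes", "books", "books", "books"]
```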

Our QA team has been working on a B2C product discovery application, where all the purchases made by a user from multiple stores get discovered and displayed in the application. Machine learning is applied in the application for the following aspects:

  1. Product recommendation
  2. Product Categorization
  3. Product Deduplication

When there are failures in QA results because certain data could not be successfully processed, that data set is fed into the machine learning model with appropriate details. For example, if the system could not categorize certain products, the product details are fed into the model to enrich future categorizations.

Data Analytics Testing

Data Analytics (DA) is the process of examining data sets in order to draw conclusions about the information they contain. Data analytics techniques can reveal trends and metrics that would otherwise be lost in the mass of information. This information can then be used to optimize processes to increase the overall efficiency of a business or system.

QA (with the help of developers) tests the app to make sure that all scenarios have sufficient analytics around them and capture accurate data. This user behavior data forms the basis for major product decisions around growth, engagement, and so on. It also comes in handy when debugging certain scenarios.

One of our projects had Firebase Analytics implemented to capture user events on each page. The gathered data was then segregated and analyzed to find usage patterns and improve the product.

Automation Testing

Automated testing differs from manual testing simply in that the testing is done through an automation tool. In this form of testing, less time is needed for exploratory tests and more time for maintaining test scripts, while overall test coverage increases.

As discussed earlier, a regression test suite becomes exhaustively large once the product reaches optimal stability. Manually executing the regression tests at this stage consumes a considerable amount of time and resources. To solve this problem, we look toward automating the testing process, i.e., Automation Testing.

Our automation design follows the process below.

Test Tools Selection

The right test tool selection largely depends on the technology the ‘Application Under Test’ is built on. So here at Tarams, a thorough proof of concept is conducted before conclusively selecting the automation tool.

We have used Selenium to automate the testing of multiple web applications, while using different languages such as Java, Python, TypeScript etc.
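A pattern commonly used in Selenium suites is the Page Object, where each page exposes intent-level methods and hides the locators. In this sketch the page and locators are illustrative, and the driver is duck-typed so any Selenium-style WebDriver can be passed in:

```python
class LoginPage:
    """Page Object wrapping a login screen; `driver` is any object with a
    Selenium-style find_element(by, value) method."""

    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Tests call this intent-level method; locators stay encapsulated here.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self  # allow call chaining in tests
```

If locators change, only the Page Object is updated, which keeps maintenance cost down as the suite grows.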

Planning, Design & Development

After selecting a tool for automation, QA moves toward planning the specifics required for implementation, such as designing the test framework, test scripts, test-bed preparation, the schedule/timeline of scripting and execution, and the deliverables.

This phase also includes QA sorting the test suite to find all the automation candidates that will eventually be automated. In some projects, the QA team has achieved test automation coverage of approximately 70%.

Test Execution

Once the automation test scripts are ready, they are added to the automation suite for execution using Jenkins, on cloud devices or the Selenium Grid, and a collective report with the detailed execution status is generated.

Automation reports are generated by the tool itself or using external libraries like Extent Reports. This is a continuous process of developing and executing test cases.

Maintenance

As new functionalities are added to the System Under Test in successive cycles, automation scripts need to be added, reviewed, and maintained for each release cycle. Updating the automation code to stay relevant with application changes consumes around 5-10% of QA bandwidth on average.

Architecture

Our QA teams have developed a generic automation framework that can be used across multiple projects for Selenium automation. The framework is versatile in handling different possible exceptions and failures, and at the same time provides the capability to connect to the APIs of multiple external systems in order to compare data across those systems. Below are a few of the framework’s key functionalities:

  • The framework is designed to generate any test data that may be required while automating the test.
  • Abstract reusable methods readily available to be implemented in any project.
  • Extendable to add any new features in the future if necessary.
  • Easy-to-read HTML test reports
  • Automated test status updates in the test management tool
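The test-data generation mentioned in the first bullet can be as simple as a seeded random factory, so that every run of the suite sees the same data; the record fields are illustrative:

```python
import random
import string

def make_user(rng):
    """Generate one synthetic user record for a test run."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {"username": name,
            "email": f"{name}@example.test",   # reserved test domain
            "age": rng.randint(18, 90)}

def make_test_data(seed, count):
    """Seeded, so reruns of the suite are fully reproducible."""
    rng = random.Random(seed)
    return [make_user(rng) for _ in range(count)]
```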

API Testing

While developers tend to test only the functionalities they are working on, testers are in charge of testing both individual functionalities and a series or chain of functionalities, discovering how they work together from end to end.

The reusable API test harness, which has been designed from the ground up, can also be used while testing the front end. Since the Selenium library can only automate the UI, it creates a challenge when we need to fetch data from an external source.

API tests are introduced at an early stage, checking the staging and dev environments. It’s important to start them as soon as possible to ensure that both the endpoints and the values they return behave properly.

QA uses several tools to verify the performance and functionality of the APIs, such as Postman, the RestAssured Java library, or a pure Java implementation of the HTTP methods.

Some of the tests performed on APIs are:

  • Functionality testing: the API works and does exactly what it’s supposed to do.
  • Reliability testing: the API can be consistently connected to and leads to consistent results.
  • Load testing: the API can handle a large volume of calls.
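The functionality check often boils down to a contract assertion on the response. Here is a tool-agnostic sketch, with a hypothetical endpoint shape and field names:

```python
def check_order_response(status_code, body: dict):
    """Functional assertions a QA suite would make on GET /orders/{id};
    returns a list of violations (an empty list means the contract holds)."""
    problems = []
    if status_code != 200:
        problems.append(f"expected 200, got {status_code}")
    for field_name, field_type in (("order_id", str), ("total", (int, float))):
        if not isinstance(body.get(field_name), field_type):
            problems.append(f"missing or mistyped field: {field_name}")
    return problems
```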

QA in Production

The quality assurance team’s responsibility doesn’t end with pre-release testing and the release itself; the QA team keeps a close eye on the software running in production.

Since an application can be used by hundreds of thousands of users in vastly different environments and since there are a multitude of 3rd party integrations in-play, it is very critical to identify field issues and replicate them in house at the earliest.

Also, the usage statistics generated in production are used by QA to enhance the test scenarios and check for extra use cases that should be added to the test suite.

Test Data Management

There are different types of data required for effectively testing any software product, and effective management of test data plays a vital role in testing any application. This is critical in ensuring that testing is performed with the right set of data, and that testing time is well managed by pre-defining, storing, and cloning test data. While data without external dependencies is easier to generate or mock with the help of scripts, other types of data are harder to generate.

Wherever possible, Tarams gets test data directly from production by taking a dump of the database and using it as test data. Since some production databases may contain sensitive user information, we focus on data security and ensure the data is not compromised.
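A minimal sketch of the kind of masking applied before reusing a production dump as test data (the column names are illustrative; a real pipeline must cover every sensitive column):

```python
import hashlib

def mask_value(value: str, salt: str = "qa-env-salt") -> str:
    """Deterministic one-way mask: the same input maps to the same token,
    so relational joins still work, but the original PII is unrecoverable."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def anonymize_row(row: dict, pii_columns=("email", "phone", "name")) -> dict:
    return {k: (mask_value(v) if k in pii_columns else v)
            for k, v in row.items()}

row = anonymize_row({"id": 7, "email": "real.user@gmail.com", "total": 12.5})
```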

Test Environments

Testing is primarily performed in the QA and PROD environments. For stress/load testing, we use the STAGING environment, which is a perfect replica of production in its infrastructure.

Once a build is found to meet the expectations for the release, it is deployed to the next higher environment. Different environments are required for testing to ensure that activities in one environment don’t impact the data or the test environment required for other activities; for example, stress/load testing must not impact the environment used for the functional testing of the application.

Source Code Management (SCM)

SCM allows us to track code changes and check the revision history of the code, which is used when changes need to be rolled back. With Source Code Management, both the developers and QA push code to a repository, whether a cloud service such as GitHub or on-premise servers such as Bitbucket.

Troubleshooting becomes easy as you know who made the changes and what were those changes. Source Code Management helps streamline the software development process and provides a centralized source for your code.

SCM starts as soon as the project is initiated, from the initial commit until the application is fully functional and under regular maintenance.

Continuous Integration

As the code-base grows larger, adding extra functionality raises the threat of breaking the entire system. This problem is overcome by the introduction of Continuous Integration (CI). With every push of code, a CI tool such as Jenkins triggers an automated build that runs smoke tests, which help detect errors, if any, early in the process.

QA also has several scheduled automation triggers, configured to run according to requirements. The CI process ensures that the code is deployable at any point, and can even release to production automatically if the latest version passes all automated tests.

Listed below are some of the advantages of having Continuous Integration:

  1. Reduces the risk of discovering bugs only after the code is deployed to production
  2. Better communication when sharing code, giving more visibility and collaboration
  3. Faster iterations; as we release code often, the gap between the application in production and the one the developer is working on is much smaller

Conclusion

This paper gives a brief overview of our efforts in delivering high-quality software products through rigorous levels of testing in parallel with our development efforts.

Our QA expertise in – manual testing (full stack), End-to-End test automation, API automation and performance testing for both mobile and web applications, enhances the efficiency of the products while keeping the user in mind.

Authors

Chethan Ramesh

A Senior QA Engineer at Tarams with over 7 years of experience in full-stack testing and automation testing. Chethan has been associated with Tarams for more than two and a half years.

Pushpak Bhattacharjee

Pushpak Bhattacharjee is a QA Manager at Tarams with over 9 and a half years of experience in full stack testing and automation testing, and has been associated with Tarams for more than two and a half years.


Applications of Big Data Analytics in real life

Today, millions of users click pictures, make videos, send texts, and communicate with each other through various platforms. This results in a huge amount of data being produced, used, and re-used every day.

In 2013, the total amount of data in the world was 4.4 zettabytes, a figure projected to grow to 44 zettabytes by 2020 (one zettabyte is a trillion gigabytes, so 44 zettabytes is 44 trillion gigabytes).

All of this ‘Data’ is a precious resource, which can be harnessed and understood by deploying certain techniques and tools. This is the gist of Big Data and Data Analytics. Using them, many organizations are able to gain insights into customer mindsets, trending topics, the next big things, and more.

Let us take a look at how Big Data applications have influenced various industries and sectors, and the ways in which each benefits from them.

Education

The education industry is required to maintain a significant amount of data regarding faculty, courses, students, and results. Proper analysis of this data can yield insights that enhance the operational efficiency of educational institutions, and it can be put to use in numerous ways.

Based on a student’s learning history, customized learning plans can be put in place, improving overall results. Similarly, course material can be reframed based on which topics students learn more quickly and which components are easier to grasp. And as a student’s progress, interests, strengths, and weaknesses are understood better, it becomes easier to suggest the career paths best suited to them.

Healthcare

The healthcare industry generates a significant amount of data, and Big Data helps the industry predict epidemic outbreaks in advance. It may also help postulate preventive measures for such scenarios.

Big Data may also help predict disorders at an early stage, which can guard against further deterioration and make treatment more effective as well.

Government

Governments of all nations handle a significant amount of data every day, drawn from sources such as citizen databases and geographical surveys.

By putting Big Data Analytics to good use, governments can identify the areas in need of immediate attention. Similarly, challenges such as exploitation of energy resources and unemployment can be dealt with better, and zeroing in on tax evaders and recognizing fraud becomes easier as well. Big Data also makes outbreaks of food-borne infections easier to detect, predict, and act upon.

Transportation

There are various ways in which Big Data makes transportation more efficient and easier, and the technology holds vast potential in the field.

As an example, Big Data can be used to assess commuters’ requirements on different routes and can help with route planning that reduces waiting times. Similarly, traffic congestion and patterns can be predicted in advance, and accident-prone areas can be identified and addressed in a suitable manner.

Uber is a brand that relies on Big Data Analytics. It generates data about its vehicles, each trip they make, their locations, and their drivers, which can be used to predict the demand for and availability of cabs in a given area.

Banking

Data in the banking sector is huge and grows each day. With proper analysis, it is possible to detect fraudulent activities such as misuse of debit or credit cards or money laundering. Big Data Analytics also helps with risk mitigation and brings business clarity.
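The card-fraud detection mentioned above often starts from simple screening rules before any statistical modelling. The sketch below is a toy illustration, not any bank's actual system: the thresholds, field names, and rules are all assumptions chosen for clarity.

```python
# Toy rule-based sketch of card-fraud screening: a transaction is
# flagged for review when it exceeds a spending threshold or
# originates outside the cardholder's usual country.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float        # transaction amount in the account currency
    country: str         # country where the card was used
    home_country: str    # cardholder's usual country


def is_suspicious(txn: Transaction, amount_threshold: float = 5000.0) -> bool:
    """Flag unusually large or out-of-country transactions for review."""
    if txn.amount > amount_threshold:
        return True
    if txn.country != txn.home_country:
        return True
    return False
```

Real anti-money-laundering systems layer many such rules with machine-learned risk scores, but the flag-for-human-review pattern is the same.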

As an example, Bank of America has been using SAS AML for over 25 years. The software is based on data analytics and is intended to analyse customer data and identify suspicious transactions.

Weather patterns

Weather satellites and sensors located across the globe collect a significant amount of data, which is used to keep tabs on weather and environmental conditions. Using Big Data Analytics, this data supports weather forecasting and a better understanding of the patterns of natural disasters. It can also serve as a resource for studying global warming.

Governments can prepare themselves in advance for a crisis. The data may even help determine metrics related to the availability of drinking water across geographies.

Media and entertainment

People own and have access to digital gadgets that they use to stream, view, and download videos and entertainment applications. The significant amount of data this generates can be harnessed, and one of the prime advantages of putting it to use is predicting audience tastes and preferences in advance. This in turn helps optimize the scheduling of media streams, whether broadcast or on-demand.

The data can also be used to study customer reviews and figure out the factors that don’t delight customers. Targeting advertisements over media becomes easier as well.

As an example, Spotify, a provider of on-demand music, uses Big Data Analytics to analyse data collected from users across the globe. The data is then used to give fine recommendations for a user to choose from, based on the user’s listening history and the tracks most preferred by users in the same geographical region or demographic.

With Big Data, it is important that organizations use the data they collect to gain a competitive advantage; merely collecting the data is not enough.

Big Data solutions make this analysis easier and help ensure efficient use of the data. Applications of Big Data extend further still, to fields such as aerospace, agriculture, sports and athletics, and retail and e-commerce.
