Sunday, June 23, 2019

REST API Security: Pen Tests

Security tests ensure that APIs are secure from external threats and protected from potential vulnerabilities, as discussed in one of my previous posts. The primary focus of API security testing is to find vulnerabilities in the API under test by running penetration tests, fuzz tests, validations, sensitive data exposure checks, and so on.
This quick read discusses the importance of pen tests, the stages of their lifecycle, and testing methods.

Penetration (Pen) Tests

One of the imperatives of an API testing strategy is penetration testing. A pen test is a simulated cyber attack against a system or API that uncovers exploitable vulnerabilities such as intra-network loopholes, XSS attacks, SQL injections, code injection attacks, and so on.
Pen tests assess the threat vector not only from an external standpoint, covering supported functions and available resources, but also from the standpoint of the API's internal components.

Importance of Penetration Tests

  • No compromise of data privacy
  • Guaranteed, secure financial transactions and financial data over the network
  • Discovery of security vulnerabilities and loopholes in APIs and the underlying systems
  • Simulation, forecasting, and assessment of the impact of attacks
  • Making APIs fully compliant with information security requirements

PenTest Lifecycle

Having a good understanding of the causes of vulnerabilities from the earlier section is extremely important. Now, let's get into the five stages of the pen test lifecycle: preparation (planning and reconnaissance), scanning, gaining access, maintaining access, and analysis (reporting).

Preparation, Planning, and Reconnaissance

The first phase of the lifecycle involves two parts:
  • Scope definition: define the goals of the tests to be carried out, the testing methods, and the systems to be addressed
  • Intelligence gathering: collect domains and endpoints, and understand how the target API works along with its exposure to vulnerabilities

Scanning

The scanning phase focuses on understanding how the target application responds to various intrusion attempts, using both static and dynamic analysis.

Gaining Access

This phase attempts to uncover API vulnerabilities through application attacks such as XSS (cross-site scripting), SQL injection, code injection, and backdoors. Once vulnerabilities are uncovered, exploiting them through privilege escalation, data theft, and traffic interception is also part of the gaining access scope, as is assessing the damage each vulnerability could cause.
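As an illustration of why SQL injection belongs in this phase, the following sketch simulates a vulnerable lookup against an in-memory SQLite table (the table, data, and function names are hypothetical, standing in for an API's datastore):

```python
import sqlite3

# In-memory table standing in for a vulnerable API's datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 's3cr3t'), (2, 'bob', 'hunter2')")

def lookup_vulnerable(name):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    query = "SELECT name, secret FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

# A classic injection payload bypasses the filter and returns every row.
payload = "alice' OR '1'='1"
rows = lookup_vulnerable(payload)
print(len(rows))  # 2 — both users' secrets leak
```

A pen tester probing an endpoint with payloads like this is looking for exactly this behavior: input that changes the shape of the query instead of merely filling in a value.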

Maintaining Access

By establishing an illicit, long-term presence in the network, intruders can cause irreversible damage to systems: persisting in a system for a long time facilitates mining highly sensitive data (especially on government, military, and financial networks) in a steady, well-researched, and meticulously planned attack.
The primary intention of the maintaining access phase is to assess how long an attacker could remain present and the chances of gaining in-depth access to the systems/APIs.

Analysis

The final phase of the lifecycle compiles and presents the results of the penetration tests as a report. The report generally contains the specific vulnerabilities that were exploited, details of sensitive data compromised or accessed during the exercise, and, most importantly, how long the tester was able to remain in the system undetected. These results feed into security configurations across the organization to prevent future attacks.
Hope this short read has provided a good understanding of pen tests and their lifecycle. Though there are many out-of-the-box tools available on the market to run pen tests against our APIs, it's important to understand what pen tests are and why they are a key element of an API testing strategy.
Stay tuned! In the next post, we will look at the different types of penetration tests.

Sunday, June 9, 2019

Common Causes of REST API Security Vulnerabilities

Learn more about common REST API security vulnerabilities and what causes them.

As part of this series on REST API security vulnerabilities, we have gone through a few types of vulnerabilities; with this week's post, we look at a few common concerns, or causes, that make our APIs vulnerable to various attacks. At the end of this article, we will also quickly dive into the security tests that help expose vulnerabilities as part of regression tests.

API Design and Development Flaws 

Missing or unenforced API security principles and best practices can lead to defects that expose business-critical data. As another aspect of design and development, we need to keep APIs as simple as possible (less intricate), as complexity can lead to lower coverage and greater vulnerability. Inadequate user input validation, SQL injection loopholes, and buffer overflows are a few other causes. Understanding and implementing the various design strategies and RESTful API design practices helps reduce design and development flaws to a great extent.
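One concrete mitigation for the inadequate-input-validation flaw named above is to whitelist-validate untrusted input before it reaches any command or query. A minimal sketch (the pattern and function name are illustrative, not from the original post):

```python
import re

# Whitelist pattern: only plain alphanumeric usernames, 3-32 characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw):
    """Reject anything that is not a plain alphanumeric username."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

print(validate_username("alice_01"))               # accepted
try:
    validate_username("alice'; DROP TABLE users; --")
except ValueError as exc:
    print(exc)                                     # rejected before reaching the database
```

Validation like this complements, but does not replace, parameterized queries and output encoding.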

Poor System Configuration

Even the best design and development are not enough to safeguard a system if the configuration of the systems hosting the APIs does not adhere to security compliance. Poor configuration introduces loopholes that attackers use to steal information.

Human Errors

Non-adherence to organizational security compliance and inadequate knowledge of security measures, such as document-shredding policies, secure coding practices, choosing strong passwords, keeping passwords confidential, resetting passwords periodically, and avoiding unknown/unsecured sites, create loopholes in APIs and lead to security breaches.

Internal and External Connectivity

APIs being part of unsecured internal and external network connectivity is another major cause of vulnerability. Exposure of APIs to large and diverse channels such as mobile networks, poor risk management, and lenient authorization practices within the network are a few more causes in this category.
So, how do we find the vulnerabilities in APIs? APIs should go through security tests.

Security Tests

Security tests ensure that APIs are secure from external threats and protected from the vulnerabilities we have discussed above. The primary focus of API security testing is to find vulnerabilities in the API under test by running penetration tests, fuzz tests, validations, and sensitive data exposure checks.
Security functional testing and security vulnerability testing are the two categories of security tests. Functional tests execute manual checks for the presence of security mechanisms within the API implementation, while security vulnerability tests execute automated test cases that may expose vulnerabilities.
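As a sketch of what an automated security test case might look like, the toy handler below (the handler, header names, and status codes are illustrative, not a real framework) asserts that unauthenticated requests are denied:

```python
def get_account(request_headers):
    """Toy handler: returns 401 unless a bearer token is presented."""
    auth = request_headers.get("Authorization", "")
    return 200 if auth.startswith("Bearer ") else 401

# Automated vulnerability test: unauthenticated calls must be denied.
assert get_account({}) == 401
assert get_account({"Authorization": "Bearer abc123"}) == 200
print("auth checks passed")
```

In a real suite, the same assertions would run against the deployed API in every regression cycle rather than against an in-process stub.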
So, with a rigorous run of security tests over APIs, one can expose all possible vulnerabilities early, get them fixed, and protect the APIs from those potential threats.

Sunday, June 2, 2019

RESTful API Design Principle: Deciding Levels of Granularity

Granularity is an essential principle of REST API design. Business functions divided into many small actions are fine-grained, while business functions divided into large operations are coarse-grained.
However, discussions about what level of granularity APIs need may vary; we will get distinct suggestions and may even end up in debates. Regardless, it's best to decide based on business functions and their use cases, as granularity decisions will undoubtedly vary on a case-by-case basis.
This article discusses a few points on how API designers would need to choose their RESTful service granularity levels.

Coarse-Grained and Fine-Grained APIs

In some cases, calls across the network may be expensive, so coarse-grained APIs may be the best fit to minimize them: each request from the client forces a lot of work at the server side, whereas with fine-grained APIs, many calls are required to do the same amount of work from the client side.
Example: Consider a service that returns customer orders in a single call. In the fine-grained case, it returns only the customer IDs, and for each customer ID the client needs to make an additional request to get the details, so n+1 calls must be made by the client. These round trips may be expensive in terms of performance and response times over the network.
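The n+1 effect can be made concrete with a small in-process simulation (the "API" here is simulated with plain functions and a call counter; the data is hypothetical):

```python
# Simulated datastore: customer ID -> that customer's orders.
ORDERS = {1: ["book"], 2: ["pen", "ink"], 3: ["lamp"]}
calls = {"count": 0}  # stands in for round trips over the network

def coarse_get_all_orders():
    calls["count"] += 1
    return ORDERS                      # one round trip returns everything

def fine_get_customer_ids():
    calls["count"] += 1
    return list(ORDERS)

def fine_get_orders(customer_id):
    calls["count"] += 1
    return ORDERS[customer_id]

# Coarse-grained: a single call.
calls["count"] = 0
coarse_get_all_orders()
coarse_calls = calls["count"]          # 1

# Fine-grained: 1 call for IDs + n calls for details (the n+1 problem).
calls["count"] = 0
for cid in fine_get_customer_ids():
    fine_get_orders(cid)
fine_calls = calls["count"]            # 4 for 3 customers
print(coarse_calls, fine_calls)
```

With real network latency on every call, the gap between 1 and n+1 round trips dominates response time as n grows.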
In other cases, APIs should be designed at the lowest practical level of granularity, because combining them is possible and allowed in ways that suit customer needs.
Example: An electronic form submission may need to collect an address as well as, say, tax information. In this case, there are two functions: one collects the applicant's address, and the other collects the tax details. Each task should be addressed with a distinct API and a separate service, because an address change is logically a different event, unrelated to tax reporting; there is no reason to submit the tax information again for an address change.

Levels of Granularity

The level of granularity should satisfy the specific needs of the business functions or use cases. While the goal is to minimize calls across the network for better performance, understanding the set of operations API consumers require gives a better idea of the "correctly grained" APIs in our designs.
At times, it may be appropriate for the API design to support both coarse-grained and fine-grained operations, giving API developers the flexibility to choose the right APIs for their use cases.

Guidelines 

The following points may serve as some basic guidelines for the readers to decide their APIs granularity levels in their API modeling.
  • In general, consider that services may be coarse-grained and APIs fine-grained.
  • Maintain a balance between the amount of response data and the number of resources required to provide that data; this helps decide the granularity.
  • The types of operations performed on the data should also be considered when defining the granularity.
  • Read requests are normally coarse-grained: returning all the information required to render a page in one response often hurts less than two separate API calls.
  • Write requests, on the other hand, should be fine-grained: find out the everyday operations clients need and provide a specific API for each use case.
  • At times, you should use medium-grained APIs, i.e., neither fine-grained nor coarse-grained; a common example is nested resources that go no more than two levels deep.
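To illustrate the medium-grained, two-levels-deep guideline, here is a small sketch (the resource paths and the depth heuristic are hypothetical examples, not a formal rule):

```python
# Hypothetical medium-grained resource paths: nesting stops at two levels.
MEDIUM_GRAINED = [
    "/customers/{customerId}",
    "/customers/{customerId}/orders",   # two levels deep: stop here
]
TOO_FINE = "/customers/{customerId}/orders/{orderId}/items/{itemId}"

def nesting_depth(path):
    # Count parameterized segments as a rough proxy for nesting depth.
    return path.count("{")

for p in MEDIUM_GRAINED:
    assert nesting_depth(p) <= 2
print(nesting_depth(TOO_FINE))  # 3 — deeper than the guideline suggests
```

Deeper paths like TOO_FINE tend to hard-wire one navigation route into the URL; exposing /orders/{orderId} as its own top-level resource is one common alternative.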

While the above guidelines may understandably lead to more API deployment units, which can cause annoyances down the line, there are patterns, especially the API Gateway, that bring better orchestration to these numerous APIs. Orchestrating APIs with optimized endpoints, request collapsing, and more helps address the granularity challenges.

Friday, May 24, 2019

Importance of Ubiquitous Language in Domain-Driven Design

Once developers and domain experts speak the same language, production pipelines can move forward faster.

Most commercial software applications are created from a set of complex business requirements to solve specific business problems or needs. However, expecting all software developers and architects to be experts in business domains, and to know entire business functions, is impractical. On the other hand, how do we create software that brings value to the consumers who need the automation it provides? A software application cannot be just a showpiece of technical excellence; in most cases, it must also be a real, usable automation of business excellence. Domain-driven design and its models are the answer to these questions.
This short article talks about one of the key principles of Domain-Driven Design, the "Ubiquitous Language," as DDD concepts, principles, and patterns bring technology and business excellence together in any sophisticated software application that is created and managed.

Talk Ubiquitously

Ubiquitous language is a model that acts as a universal language for communication between software developers and domain experts.
Collaborating on, learning, and defining a model surfaces many initial communication barriers between software specialists and domain experts. Evolving the domain model while practicing the same kind of communication (in discussions, writing, and diagrams) within a context is therefore paramount for successful implementation, and that shared form of conversation is called the Ubiquitous Language. It is structured around the domain model, is used extensively by all team members within a bounded context, and should be the medium connecting all the team's activities during software development.
With a ubiquitous language, the design team can establish a deep understanding, connect domain jargon with software entities, and keep discovering and evolving their domain models.

Ubiquitous Language | Equivalent Pseudo Code | Comments
"We administer vaccines" | AdministerVaccines {} | Not a core domain; needs some more specific details
"We administer flu shots to patients" | patientNeedAFluShot() | Better, but may be missing some domain concepts
"The nurse administers flu vaccines to a patient in standard doses" | Nurse->administer vaccine(patient, Vaccine.getStandardDose()) | Much better, and may be a good place to start
As we observe in the above table, user stories (requirements) can be written in various ways; the last row makes the most sense because it is clearest about the what and the how.
Hopefully, this article gives readers a glimpse of how DDD principles advocate and enable greater collaboration between subject matter experts, business analysts, and non-technology stakeholders and the technical/development community to produce complex, domain-driven systems.
[published @Dzone as well]

Sunday, May 19, 2019

REST API Security Vulnerabilities

** Published at DZONE

Being simple, schematic, faster to develop, and quick to deploy makes APIs popular and widely used. Naturally, this brings various challenges in maintaining their implementations and keeping them secure from threats such as Man-in-the-Middle attacks, lack of XML encryption, insecure endpoints, exposed API URL parameters, and so on. A REST API has vulnerabilities similar to those of a web application.

In this article, I will present a few common API vulnerabilities that every developer should be aware of and on the lookout for in their code.

APIs Exposing Sensitive Data, and Protection
Personal information, credit card information, health records, financial information, business information, and many other categories of data need protection, so we need to evaluate and determine the types of data being transmitted or stored and ensure critical data is protected with appropriate encryption algorithms and security measures. Some of the dos and don'ts of REST API security best practices are as follows:

  • Classify data and apply controls according to those classifications
  • Do not store sensitive information unless necessary, and discard it as soon as possible
  • Use tokenization and truncation methods to prevent the exposure of sensitive data
  • Use encryption; it is a necessary and crucial protection measure
  • Do not cache sensitive data (or disable caching for sensitive data transactions)
  • Use salts and adaptive hashing (with a configurable number of iterations) for passwords
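The last point, salted adaptive hashing, can be sketched with PBKDF2 from Python's standard library (the iteration count and function names here are illustrative; production systems should tune the count and may prefer dedicated libraries):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # adaptive: raise this as hardware gets faster

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The salt defeats precomputed rainbow tables, and the configurable iteration count slows brute-force attempts without changing the stored format.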


Authentication Attacks
Authentication attacks are processes by which a hacker attempts to exploit the authentication process and gain unauthorized access. Bypass attacks, brute-force attacks (on passwords), impersonation, and reflection attacks are a few types of authentication attacks. Properly implemented basic authentication, avoiding default keys, and authorization with verified credentials are a few protection measures to safeguard our APIs.

Cross-Site Scripts
Cross-site scripting, also known as an XSS attack, is the process of injecting malicious code as input to a web service, usually through the browser, so that it reaches a different end user. Once injected, the malicious script can access any cookies, session tokens, or sensitive information retained by the browser, or even replace the whole content of the rendered pages. XSS is categorized into server-side XSS and client-side XSS; traditionally, it consists of three types: Reflected XSS, Stored XSS, and DOM-based XSS.
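The standard defense on the output side is to escape untrusted input before it is rendered. A minimal sketch using Python's standard library (the greeting template is hypothetical):

```python
import html

def render_greeting(name):
    # Escape untrusted input before embedding it in HTML.
    return "<p>Hello, " + html.escape(name) + "</p>"

malicious = "<script>document.cookie</script>"
safe = render_greeting(malicious)
print(safe)  # the script tag is rendered inert as &lt;script&gt;...
```

Real template engines apply this escaping automatically by default; the vulnerability typically appears when that auto-escaping is bypassed or disabled.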

Cross-Site Request Forgery
Cross-site request forgery, also known as CSRF, sea-surf, or XSRF, is a vulnerability in which a web application exposes the possibility of an end user being forced (via forged links, emails, or HTML pages) to execute unwanted actions in a currently authenticated session. The synchronizer token pattern, cookie-to-header tokens, double-submit cookies, and client-side safeguards are common CSRF prevention methods.

Denial-of-Service (DoS) Attack
A Denial-of-Service attack intends to make the targeted machine reach its maximum load (its capacity to serve requests) quickly by sending numerous falsified requests, so that the target system denies further genuine requests.
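One common mitigation on the API side is rate limiting. The sketch below implements a token bucket (the capacity and refill rate are arbitrary illustration values, not a recommendation):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow bursts up to `capacity`, then throttle."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: the request would be rejected

bucket = TokenBucket(capacity=3, refill_per_sec=1)
results = [bucket.allow() for _ in range(5)]  # burst of 5 back-to-back requests
print(results)  # first 3 allowed, the rest throttled
```

In production, limits are usually enforced per client identity (API key or IP) at a gateway, well before the request reaches application code.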

Injection Attack
The attacker supplies untrusted input to the application, which gets executed or processed as part of a command or query. This results in partial or complete subversion of the application's behavior and leads to consequences such as data theft, data loss, loss of data integrity, and DoS, and can even lead to full system compromise.

Insecure Direct Object References
Insecure Direct Object References, or simply IDOR, is an equally harmful top API vulnerability; it occurs when an application exposes direct access to internal objects based on user inputs, such as an ID, a filename, and so on. You might have observed that many REST URIs expose some sort of ID, especially for fetching resources. Let's take an example scenario to make it clear: say Bob is using an API client and needs to get his file with ID 1001. He would use https://myapi.server.com/browse/file/id/1001. Now assume he tries Alice's file (ID 1003), which he is not supposed to access, i.e., Bob tries the URL https://myapi.server.com/browse/file/id/1003; he should be denied access. If he is not, the API is exposed to the IDOR vulnerability.
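The fix is an explicit ownership check before serving the object. A minimal sketch of Bob's scenario (the ownership table and handler are hypothetical stand-ins for a real datastore and endpoint):

```python
# Hypothetical ownership table behind /browse/file/id/{file_id}.
FILE_OWNERS = {1001: "bob", 1003: "alice"}

def get_file(requesting_user, file_id):
    owner = FILE_OWNERS.get(file_id)
    if owner != requesting_user:
        return 403, None                          # deny access to other users' objects
    return 200, "contents of file %d" % file_id

print(get_file("bob", 1001))   # Bob's own file: allowed
print(get_file("bob", 1003))   # Alice's file: denied — IDOR attempt blocked
```

Using indirect, per-session object references (instead of raw database IDs in the URL) is a complementary defense.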

Man-in-the-Middle (MITM) Attack
A Man-in-the-Middle attack is carried out by a perpetrator placed in the middle of the network or communication between a genuine user and an application server. It intends to steal, eavesdrop on, impersonate, secretly relay, intercept, or alter communications, including API messages, between the two communicating parties, while making it appear as if a normal exchange of information is underway.

Replay Attacks and Spoofing
Replay attacks and spoofing, aka playback attacks, are network attacks in which a valid data transmission (intended to occur only once) is maliciously repeated by an attacker who captured the valid transaction and replays it as many times as they like. The server, expecting a valid transaction, has no reason to doubt the requests; however, each is a masqueraded request and can have catastrophic effects for the clients. Protection measures include one-time passwords with session identifiers, TTL (Time-To-Live) limits, MAC implementation at the client side, including timestamps in requests, and secure protocols such as Kerberos, secure routing, and the Challenge-Handshake Authentication Protocol (CHAP).
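The timestamp and MAC measures listed above combine naturally: each request carries a timestamp and an HMAC over it, and the server rejects stale, tampered, or already-seen signatures. A hedged sketch (the secret, TTL window, and field names are illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"   # illustrative; real keys come from secure storage
SEEN = set()                # recently seen signatures (would expire with the TTL)
MAX_AGE = 30                # seconds a signed request stays valid

def sign(payload, ts):
    msg = ("%s:%s" % (ts, payload)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def accept(payload, ts, signature):
    if abs(time.time() - ts) > MAX_AGE:
        return False                              # stale: outside the TTL window
    expected = sign(payload, ts)
    if not hmac.compare_digest(expected, signature):
        return False                              # tampered or signed with wrong key
    if signature in SEEN:
        return False                              # replayed: signature already used
    SEEN.add(signature)
    return True

ts = time.time()
sig = sign("transfer=100", ts)
first = accept("transfer=100", ts, sig)
replayed = accept("transfer=100", ts, sig)
print(first, replayed)  # True False — the second, identical request is rejected
```

Protocols such as Kerberos and CHAP build the same freshness guarantees (timestamps, nonces, challenges) into the authentication exchange itself.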

"One of the best REST API books for beginners" - BookAuthority
