syl-ade
Helper I

Session error status code 430

Hi all,

I keep getting the same error:

[TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more} HTTP status code: 430.

 

I have no active Spark jobs in my capacity. 

 

Diagnostics:

{
  "timestamp": "2025-10-20T11:04:02.439Z",
  "transientCorrelation": "87ea6ece-dcb8-4ff1-85e2-55054972e4b1",
  "aznb": {
    "version": "1.6.124"
  },
  "notebook": {
    "notebookName": "Notebook 3",
    "instanceId": "e63f1378-ab9f-4f2a-8c8a-c2820afbaaa7",
    "documentId": "trident-w-94f8beea-a952-4fe6-8a3d-3c08e04019c7-a-21cefa3b-fc8f-46c6-9b58-ea22fd13b105",
    "workspaceId": "94f8beea-a952-4fe6-8a3d-3c08e04019c7",
    "kernelId": "d3e0e33a-2beb-41ab-9bba-8ce45e7f49be",
    "clientSessionId": "1b0157ce-9d6e-4746-a42f-00f917513c60",
    "kernelState": "not connected",
    "computeUrl": "https://927aa90fa114475394349df55fe653be.pbidedicated.windows.net/webapi/capacities/927AA90F-A114-4753-9434-9DF55FE653BE/workloads/Notebook/Data/Direct/api/workspaces/94f8beea-a952-4fe6-8a3d-3c08e04019c7/artifacts/21cefa3b-fc8f-46c6-9b58-ea22fd13b105/jupyterApi/versions/1",
    "computeState": "connected",
    "collaborationStatus": "offline / joined",
    "isSaveLeader": false
  },
  "synapseController": {
    "id": "e63f1378-ab9f-4f2a-8c8a-c2820afbaaa7:snc1",
    "enabled": true,
    "activeKernelHandler": "sparkLivy",
    "kernelMetadata": {
      "kernel": "synapse_pyspark",
      "language": "python"
    },
    "state": "error",
    "sessionId": "0a4fabaf-f7f2-4529-aee5-6631cfb90971",
    "applicationId": null,
    "applicationName": "",
    "sessionErrors": [
      "[TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more} HTTP status code: 430."
    ]
  }
}

 

What can I do to successfully start a session?

1 ACCEPTED SOLUTION
rohit1991
Super User

Hi @syl-ade 

 

1. Check for active Spark sessions:

  • Go to the Monitoring hub and open the Spark tab.

  • End or cancel any running sessions. Even if you think none are active, check once - some sessions stay open in the background. (You can also list sessions over the REST API; see the sketch after this list.)

2. Wait a few minutes:

  • After stopping sessions, wait 5–10 minutes.

  • Spark sometimes keeps resources busy for a short time after a job ends.

3. Restart the notebook:

  • Close your current notebook.

  • Reopen it and run the first cell again - this starts a new, clean Spark session.

4. Check capacity limits:

  • If the error still appears, the capacity behind your workspace is at the limit of its SKU.

  • Contact your admin to increase the Fabric capacity or move to a larger SKU.

5. Try again later if needed:

  • When other users’ Spark jobs finish, your capacity frees up automatically.

  • Then you can start your session without issues.
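
If you prefer to check outside the UI, here is a minimal sketch that lists the Livy sessions in a workspace through the Fabric REST API. It assumes the azure-identity and requests packages and uses the workspace ID from the diagnostics above; treat the endpoint and the response field names as assumptions to verify against the Fabric REST API reference.

# Minimal sketch (assumptions noted above): list Livy sessions in a Fabric
# workspace so you can spot sessions that are still holding capacity.
import requests
from azure.identity import InteractiveBrowserCredential

WORKSPACE_ID = "94f8beea-a952-4fe6-8a3d-3c08e04019c7"  # from the diagnostics above

# Interactive sign-in; any azure-identity credential with a Fabric API scope works.
token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

# Assumed endpoint: Livy Sessions - List Livy Sessions (verify in the docs).
resp = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/spark/livySessions",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for session in resp.json().get("value", []):
    # Field names (livyId, state, itemName) are assumptions from the docs.
    print(session.get("livyId"), session.get("state"), session.get("itemName"))

Anything this returns in a running state that the Monitoring hub does not show is a candidate to cancel before you retry.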

 


Did it work? ✔ Give a Kudo • Mark as Solution – help others too!


4 REPLIES
tayloramy
Community Champion

Hi @syl-ade

 

It appears that your capacity is at its limit. Check the Capacity Metrics App and see what is using all the capacity. If there's nothing that can be turned off or optimized, it may be time to upgrade your capacity. 
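
If the metrics app will not load for you, you can at least confirm which capacities you can see, and their SKUs, from the REST API. A minimal sketch, assuming the azure-identity and requests packages and the List Capacities endpoint; the response field names are assumptions to verify against the docs.

# Minimal sketch (assumptions noted above): list the Fabric capacities
# visible to you, with their SKU and state.
import requests
from azure.identity import InteractiveBrowserCredential

token = InteractiveBrowserCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/capacities",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

for cap in resp.json().get("value", []):
    # Assumed fields: displayName, sku, state (e.g. Active or Paused).
    print(cap.get("displayName"), cap.get("sku"), cap.get("state"))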

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution. 

syl-ade
Helper I

Thanks, but the Capacity Metrics app does not work properly.

[screenshot: sylade_0-1760968708461.png]

Anyway... I do not have any Spark sessions currently running.

 

tayloramy
Community Champion

Hi @syl-ade

 

When the capacity reaches its limit, it has usually been overextended for a while, and it takes a while for the overage to burn down before the capacity is usable again.

See Understand your Fabric capacity throttling - Microsoft Fabric | Microsoft Learn.
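
To get a feel for the burndown timing, here is a hedged back-of-envelope sketch. It assumes accumulated overage capacity-unit-seconds are repaid at the capacity's CU rate once load stops, per the throttling doc above; the SKU size and overage figure are illustrative assumptions, not measurements.

# Back-of-envelope sketch with illustrative numbers, not measurements.
capacity_cus = 2            # e.g. an F2 SKU provides 2 capacity units (CUs)
overage_cu_seconds = 7200   # hypothetical accumulated overage in CU-seconds

# Assumption: overage is repaid at the capacity's CU rate once load stops.
burndown_seconds = overage_cu_seconds / capacity_cus
print(f"~{burndown_seconds / 60:.0f} minutes until the overage burns down")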

 

In the Capacity Metrics app, make sure you have your capacity selected and that the semantic model has refreshed.

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution. 

