Apache Spark has become a foundation for big data processing and analytics. As its use has expanded in Microsoft Fabric, different session modes have been introduced to support a variety of use cases. Among these, the New Standard Spark Session and the New High Concurrency Session stand out, each offering unique capabilities and advantages. In this blog, we’ll explore these session types, their ideal use cases, and how to get started with them.

New Standard Spark Sessions:
When you create a notebook in a Fabric-enabled workspace, the default connection is a New Standard Spark Session. This session represents a dedicated environment for running Spark jobs, ensuring that resources like memory and CPU are exclusively allocated to your tasks.
Key Features of Standard Spark Sessions:
- Single User or Application Ownership: The session is exclusively owned by a single user or application, ensuring full control over the environment.
- Sequential Execution: Jobs are executed in a specific order within the same logical context, making it ideal for workflows that require step-by-step processing (see the sketch after this list).
- Exclusive Resource Allocation: Resources such as memory and CPU are dedicated to a single process, eliminating interference from other workloads and enabling optimal performance for the task at hand.
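As a quick illustration of sequential execution, here is a minimal PySpark sketch (Fabric notebooks pre-define the `spark` session object); each cell depends on the one before it:

```python
# Cell 1: build a DataFrame with the notebook's built-in `spark` session
df = spark.range(1_000_000)

# Cell 2: depends on Cell 1 having run first - sequential, step-by-step execution
total = df.count()
print(f"Row count: {total}")
```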
Start a Standard Session:
Step 1: Navigate to a Fabric-enabled workspace and create a new notebook. Click on “New Items” at the top of the workspace to open a sidebar. Search for “notebook” and select the first option.

Step 2: Open the notebook, click on “Connect,” and select “New Standard Session.” The session will start within 5 to 10 seconds.
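Once connected, a quick sanity check confirms the dedicated session is up, assuming the pre-defined `spark` object available in Fabric notebooks:

```python
# Basic details of the dedicated session
print(spark.version)                     # Spark runtime version
print(spark.sparkContext.applicationId)  # unique ID of this session's Spark application
```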

New High Concurrency Session:
The New High Concurrency Session is designed for users who need to run multiple notebooks simultaneously within the same session.
Key Features of High Concurrency Sessions:
- REPL (Read-Eval-Print Loop) Isolation: Each notebook or job runs in its own isolated environment, ensuring that multiple notebooks sharing the same session do not interfere with each other (see the sketch after this list).
- Session Sharing: Sessions can be shared among multiple notebooks, but only for a single user.
- Better Utilization: Users can manage allocated resources efficiently, running multiple notebooks simultaneously to optimize costs.
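To make the REPL isolation above concrete, here is a minimal sketch of two notebooks attached to the same high concurrency session (the variable name is hypothetical):

```python
# Notebook 1 (attached to the shared session):
session_note = "defined in Notebook 1's REPL"

# Notebook 2 (attached to the SAME session):
# Each notebook has its own REPL, so `session_note` is undefined here.
try:
    print(session_note)
except NameError:
    print("State is isolated per notebook, even within a shared session")
```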
Current Limitations of High Concurrency Sessions:
- Only one user can share a running session at a time.
- All notebooks must have the same default lakehouse configuration.
- Sessions must share the same Spark compute properties.
Start a New High Concurrency Session:
Step 1: Ensure the feature is enabled in your workspace. Go to “Workspace Settings,” then click on “Data Engineering/Science” and select “Spark Settings.”
Step 2: Under the “High Concurrency” tab, turn on “Customize compute configurations for items” and click “Save.”

Step 3: Create two notebooks. In this example, one cleans raw sales data and the other transforms it, grouping individual product sales by day.
Step 4: Open both notebooks side by side. Connect the first notebook, “Cleaning Raw Sales Data,” to a “New High Concurrency Session.” The session will start with a name like “HC_NotebookName_…”
Step 5: In the second notebook, “Daily Product Sales”, click “Connect” and select the available high concurrency session (e.g., “HC_NotebookName_…”). Both notebooks can now run concurrently under the same session.
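For reference, here is a rough sketch of what the two notebooks might contain (the lakehouse table and column names are hypothetical):

```python
# Notebook 1 - "Cleaning Raw Sales Data"
raw = spark.read.table("raw_sales")
clean = (raw.dropna(subset=["order_id", "product_id", "amount"])
            .dropDuplicates(["order_id"]))
clean.write.mode("overwrite").saveAsTable("clean_sales")
```

```python
# Notebook 2 - "Daily Product Sales", running in the same session
from pyspark.sql import functions as F

daily = (spark.read.table("clean_sales")
              .groupBy("product_id", F.to_date("order_ts").alias("day"))
              .agg(F.sum("amount").alias("total_sales")))
daily.write.mode("overwrite").saveAsTable("daily_product_sales")
```

Because both notebooks share one session, the second starts executing immediately, with no second session spin-up.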


Standard vs. High Concurrency Sessions: A Comparison
| Feature | New Standard Session | New High Concurrency Session |
| --- | --- | --- |
| Session Management | Each notebook gets its own session, sized by SKU and capacity, isolated from the others. | Multiple notebooks can share the same session while maintaining isolation through per-notebook REPLs. |
| Resource Allocation | Fixed resources allocated to a session. | Dynamic resource allocation for efficient sharing. |
| Overhead Cost | Requires a separate session for each application. | Reduced overhead by sharing resources dynamically. |
| Performance | Dedicating all compute to a single notebook ensures the best performance. | Sharing a session among multiple notebooks may reduce performance and increase execution time. |
Monitoring Running Notebooks
To monitor all running notebooks, go to the home page and click on “Monitor” from the left menu bar. Apply filters to view only the notebooks currently in progress.
Use Cases:
Standard session
This session is ideal for scenarios with a manageable number of notebooks and a requirement for sequential execution.
Example: Consider an ETL (Extract, Transform, Load) notebook that performs data extraction, transformation, and loading. The operations in this notebook need to be executed in a specific order, and there’s no need to run other notebooks concurrently. A standard session is the best choice here, as it keeps resources focused on completing one task at a time without the overhead of concurrency.
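A minimal sketch of such a notebook, with hypothetical file paths and columns:

```python
from pyspark.sql import functions as F

# Extract: read raw order files from the lakehouse Files area
orders = spark.read.option("header", True).csv("Files/raw/orders.csv")

# Transform: cast the amount column and drop cancelled orders
orders = (orders.withColumn("amount", F.col("amount").cast("double"))
                .filter(F.col("status") != "cancelled"))

# Load: persist as a lakehouse table; each step depends on the previous one,
# which is exactly the sequential pattern a standard session suits
orders.write.mode("overwrite").saveAsTable("orders_clean")
```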
High concurrency session
This session type is designed for users who need to run multiple notebooks in parallel, maximizing productivity in scenarios where concurrent execution is essential. It allows seamless switching between notebooks without needing to terminate and reinitialize sessions, fostering a dynamic development environment.
Example:
Imagine a data scientist building a prototype for a machine learning model. They might need to explore data in one notebook while fine-tuning model parameters in another. A high-concurrency session enables both notebooks to run side-by-side, allowing the user to switch between them effortlessly, reducing setup time and enhancing efficiency.
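As a hedged sketch, the two notebooks might look like this (the table, columns, and parameter grid are made up for illustration):

```python
# Notebook A - exploratory analysis
df = spark.read.table("customer_features")
df.describe("age", "income").show()
df.groupBy("segment").count().show()
```

```python
# Notebook B - trying model parameters in the same high concurrency session
from pyspark.ml.classification import LogisticRegression

for reg_param in [0.01, 0.1, 1.0]:
    lr = LogisticRegression(regParam=reg_param)
    print(f"Configured model with regParam={lr.getRegParam()}")
    # lr.fit(training_df) would train here once training_df is prepared
```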
Final Thoughts
In summary, the New Standard Spark Session is ideal for small to medium-sized workloads, providing optimal performance by efficiently allocating resources for sequential execution without unnecessary overhead. On the other hand, the New High Concurrency Session is suited for projects of all sizes, enabling multiple notebooks to run simultaneously, enhancing productivity and cost-effectiveness.
By offering these options, Microsoft Fabric allows users to align resource allocation with their workload demands, ensuring scalability, performance, and cost management are perfectly balanced. Whether you’re running a single ETL job or juggling multiple data science notebooks, choosing the right Spark session can make all the difference in your big data processing journey.