Python Project Setup 2026: uv + Ruff + Ty + Polars

Image by Editor

# Introduction

[Python](https://www.python.org/) project setup used to mean making a dozen small decisions before you wrote your first useful line of code. Which environment manager? Which dependency tool? Which formatter? Which linter? Which type checker?
And if your project touched data, were you supposed to start with [pandas](https://pandas.pydata.org/), [DuckDB](https://duckdb.org/), or something newer?

In 2026, that setup can be much simpler.

For most new projects, the cleanest default stack is:

- [uv](https://astral.sh/uv) for Python installation, environments, dependency management, locking, and command running.
- [Ruff](https://docs.astral.sh/ruff/) for linting and formatting.
- [Ty](https://docs.astral.sh/ty/) for type checking.
- [Polars](https://pola.rs/) for dataframe work.

This stack is fast, modern, and notably coherent. Three of the four tools (uv, Ruff, and Ty) come from the same company, [Astral](https://astral.sh/), which means they integrate cleanly with each other and with your pyproject.toml.

# Understanding Why This Stack Works

Older setups often looked like this:

```
pyenv + pip + venv + pip-tools or Poetry + Black + isort + Flake8 + mypy + pandas
```

This worked, but it created significant overlap, inconsistency, and maintenance overhead: separate tools for environment setup, dependency locking, formatting, import sorting, linting, and typing, and a choice explosion at the start of every project. The 2026 default stack collapses all of that. The result is fewer tools, fewer configuration files, and less friction when onboarding contributors or wiring up continuous integration (CI).
Before jumping into setup, here is a quick look at what each tool in the 2026 stack does:

- uv: The base of your project setup. It creates the project, manages Python versions, handles dependencies, and runs your code. Instead of manually setting up virtual environments and installing packages, uv handles the heavy lifting. It keeps your environment consistent using a lockfile and ensures everything is in sync before running any command.
- Ruff: Your all-in-one tool for code quality. It is extremely fast, checks for issues, fixes many of them automatically, and also formats your code. You can use it instead of tools like Black, isort, and Flake8.
- Ty: A newer tool for type checking. It catches errors by checking types in your code and works with various editors. While newer than tools like mypy or [Pyright](https://github.com/microsoft/pyright), it is optimized for modern workflows.
- Polars: A modern library for working with dataframes. It focuses on efficient data processing using lazy execution, meaning it optimizes queries before running them. This makes it faster and more memory efficient than pandas, especially for large data tasks.

# Reviewing Prerequisites

The setup is quite simple.
Here are the few things you need to get started:

- Terminal: macOS Terminal, Windows PowerShell, or any Linux shell.
- Internet connection: Required for the one-time uv installer and package downloads.
- Code editor: [VS Code](https://code.visualstudio.com/) is recommended because it works well with Ruff and Ty, but any editor is fine.
- Git: Required for version control; note that uv initializes a [Git](https://git-scm.com/) repository automatically.

That is it. You do not need Python pre-installed, and you do not need pip, venv, pyenv, or conda. uv handles installation and environment management for you.

# Step 1: Installing uv

uv provides a standalone installer that works on macOS, Linux, and Windows without requiring Python or [Rust](https://www.rust-lang.org/) to be present on your machine.

macOS and Linux:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Windows PowerShell:

```shell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

After installation, restart your terminal and verify:

```shell
uv --version
```

Output:

```
uv 0.8.0 (Homebrew 2025-07-17)
```

This single binary now replaces pyenv, pip, venv, pip-tools, and the project management layer of Poetry.

# Step 2: Creating a New Project

Navigate to your projects directory and scaffold a new one:

```shell
uv init my-project
cd my-project
```

uv creates a clean starting structure:

```
my-project/
├── .python-version
├── pyproject.toml
├── README.md
└── main.py
```

Reshape it into a src/ layout, which improves imports, packaging, test isolation, and type-checker configuration:

```shell
mkdir -p src/my_project tests data/raw data/processed
mv main.py src/my_project/main.py
touch src/my_project/__init__.py tests/test_main.py
```

Your structure should now look like this (uv.lock appears after your first uv add or uv sync):

```
my-project/
├── .python-version
├── README.md
├── pyproject.toml
├── uv.lock
├── src/
│   └── my_project/
│       ├── __init__.py
│       └── main.py
├── tests/
│   └── test_main.py
└── data/
    ├── raw/
    └── processed/
```

If you need a specific Python version (e.g. 3.12), uv can install and pin it:

```shell
uv python install 3.12
uv python pin 3.12
```

The pin command writes the version to .python-version, ensuring every team member uses the same interpreter.

# Step 3: Adding Dependencies

Adding a dependency is a single command that resolves, installs, and locks in one step:

```shell
uv add polars
```

uv automatically creates a virtual environment (.venv/) if one does not exist, resolves the dependency tree, installs packages, and updates uv.lock with exact, pinned versions.

For tools needed only during development, use the --dev flag:

```shell
uv add --dev ruff ty pytest
```

This places them in a separate [dependency-groups] section in pyproject.toml, keeping production dependencies lean. You never need to run source .venv/bin/activate; when you use uv run, it automatically uses the correct environment.

# Step 4: Configuring Ruff (Linting and Formatting)

Ruff is configured directly inside your pyproject.toml. Add the following sections:

```toml
[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]

[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
```

A 100-character line length is a good compromise for modern screens.
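To make this concrete, here is a hypothetical snippet (not part of the project) showing the kind of issues the selected rule groups catch; `uv run ruff check --fix .` can rewrite most of them automatically:

```python
from typing import List, Optional  # pyupgrade (UP) prefers list[...] and X | None on 3.12
import os                          # F401: imported but unused; isort (I) also wants imports sorted


def first_item(items: List[int]) -> Optional[int]:
    """Return the first item, or None for an empty list."""
    if len(items) == 0:
        return None
    return items[0]
```

After auto-fixing, the unused import is gone and the annotations are modernized to `list[int]` and `int | None`.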
Rule groups [flake8-bugbear](https://github.com/PyCQA/flake8-bugbear) (B), [isort](https://pycqa.github.io/isort/) (I), and [pyupgrade](https://github.com/asottile/pyupgrade) (UP) add real value without overwhelming a new repository.

Running Ruff:

```shell
# Lint your code
uv run ruff check .

# Auto-fix issues where possible
uv run ruff check --fix .

# Format your code
uv run ruff format .
```

Notice the pattern: uv run <tool> <args>. You never install tools globally or activate environments manually.

# Step 5: Configuring Ty for Type Checking

Ty is also configured in pyproject.toml. Add these sections:

```toml
[tool.ty.environment]
root = ["./src"]

[tool.ty.rules]
all = "warn"

[[tool.ty.overrides]]
include = ["src/**"]

[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"

[tool.ty.terminal]
error-on-warning = false
output-format = "full"
```

This configuration starts Ty in warning mode, which is ideal for adoption: you fix obvious issues first, then gradually promote rules to errors.
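As a sketch of what promoting possibly-unresolved-reference to an error buys you, consider this hypothetical function (not part of the project): the name `result` is bound on only two of the three paths, so a type checker can flag the reference before it becomes a runtime failure.

```python
def parse_flag(raw: str) -> bool:
    """Hypothetical example of a possibly-unresolved reference."""
    if raw == "yes":
        result = True
    elif raw == "no":
        result = False
    # For any other input, `result` was never assigned, so a type checker
    # can flag the next line; at runtime it would raise NameError.
    return result
```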
Keeping data\/** excluded prevents type-checker noise from non-code directories.<\/p>\n<p>\u00a0<\/p>\n<p>#\u00a0Step 6: Configuring pytest<\/p>\n<p>\u00a0<br \/>Add a section for pytest:<\/p>\n<p>[tool.pytest.ini_options]&#13;<br \/>\ntestpaths = [&#8220;tests&#8221;]<\/p>\n<p>\u00a0<\/p>\n<p>Run your test suite with:<\/p>\n<p>\u00a0<\/p>\n<p>#\u00a0Step 7: Examining the Complete pyproject.toml<\/p>\n<p>\u00a0<br \/>Here is what your final configuration looks like with everything wired up \u2014 one file, every tool configured, with no scattered config files:<\/p>\n<p>[project]&#13;<br \/>\nname = &#8220;my-project&#8221;&#13;<br \/>\nversion = &#8220;0.1.0&#8221;&#13;<br \/>\ndescription = &#8220;Modern Python project with uv, Ruff, Ty, and Polars&#8221;&#13;<br \/>\nreadme = &#8220;README.md&#8221;&#13;<br \/>\nrequires-python = &#8220;&gt;=3.13&#8243;&#13;<br \/>\ndependencies = [&#13;<br \/>\n    &#8220;polars&gt;=1.39.3&#8221;,&#13;<br \/>\n]&#13;<br \/>\n&#13;<br \/>\n[dependency-groups]&#13;<br \/>\ndev = [&#13;<br \/>\n    &#8220;pytest&gt;=9.0.2&#8221;,&#13;<br \/>\n    &#8220;ruff&gt;=0.15.8&#8221;,&#13;<br \/>\n    &#8220;ty&gt;=0.0.26&#8221;,&#13;<br \/>\n]&#13;<br \/>\n&#13;<br \/>\n[tool.ruff]&#13;<br \/>\nline-length = 100&#13;<br \/>\ntarget-version = &#8220;py312&#8243;&#13;<br \/>\n&#13;<br \/>\n[tool.ruff.lint]&#13;<br \/>\nselect = [&#8220;E4&#8221;, &#8220;E7&#8221;, &#8220;E9&#8221;, &#8220;F&#8221;, &#8220;B&#8221;, &#8220;I&#8221;, &#8220;UP&#8221;]&#13;<br \/>\n&#13;<br \/>\n[tool.ruff.format]&#13;<br \/>\ndocstring-code-format = true&#13;<br \/>\nquote-style = &#8220;double&#8221;&#13;<br \/>\n&#13;<br \/>\n[tool.ty.environment]&#13;<br \/>\nroot = [&#8220;.\/src&#8221;]&#13;<br \/>\n&#13;<br \/>\n[tool.ty.rules]&#13;<br \/>\nall = &#8220;warn&#8221;&#13;<br \/>\n&#13;<br \/>\n[[tool.ty.overrides]]&#13;<br \/>\ninclude = [&#8220;src\/**&#8221;]&#13;<br \/>\n&#13;<br \/>\n[tool.ty.overrides.rules]&#13;<br \/>\npossibly-unresolved-reference 
= &#8220;error&#8221;&#13;<br \/>\n&#13;<br \/>\n[tool.ty.terminal]&#13;<br \/>\nerror-on-warning = false&#13;<br \/>\noutput-format = &#8220;full&#8221;&#13;<br \/>\n&#13;<br \/>\n[tool.pytest.ini_options]&#13;<br \/>\ntestpaths = [&#8220;tests&#8221;]<\/p>\n<p>\u00a0<\/p>\n<p>#\u00a0Step 8: Writing Code with Polars<\/p>\n<p>\u00a0<br \/>Replace the contents of src\/my_project\/main.py with code that exercises the Polars side of the stack:<\/p>\n<p>&#8220;&#8221;&#8221;Sample data analysis with Polars.&#8221;&#8221;&#8221;&#13;<br \/>\n&#13;<br \/>\nimport polars as pl&#13;<br \/>\n&#13;<br \/>\ndef build_report(path: str) -&gt; pl.DataFrame:&#13;<br \/>\n    &#8220;&#8221;&#8221;Build a revenue summary from raw data using the lazy API.&#8221;&#8221;&#8221;&#13;<br \/>\n    q = (&#13;<br \/>\n        pl.scan_csv(path)&#13;<br \/>\n        .filter(pl.col(&#8220;status&#8221;) == &#8220;active&#8221;)&#13;<br \/>\n        .with_columns(&#13;<br \/>\n            revenue_per_user=(pl.col(&#8220;revenue&#8221;) \/ pl.col(&#8220;users&#8221;)).alias(&#8220;rpu&#8221;)&#13;<br \/>\n        )&#13;<br \/>\n        .group_by(&#8220;segment&#8221;)&#13;<br \/>\n        .agg(&#13;<br \/>\n            pl.len().alias(&#8220;rows&#8221;),&#13;<br \/>\n            pl.col(&#8220;revenue&#8221;).sum().alias(&#8220;revenue&#8221;),&#13;<br \/>\n            pl.col(&#8220;rpu&#8221;).mean().alias(&#8220;avg_rpu&#8221;),&#13;<br \/>\n        )&#13;<br \/>\n        .sort(&#8220;revenue&#8221;, descending=True)&#13;<br \/>\n    )&#13;<br \/>\n    return q.collect()&#13;<br \/>\n&#13;<br \/>\ndef main() -&gt; None:&#13;<br \/>\n    &#8220;&#8221;&#8221;Entry point with sample in-memory data.&#8221;&#8221;&#8221;&#13;<br \/>\n    df = pl.DataFrame(&#13;<br \/>\n        {&#13;<br \/>\n            &#8220;segment&#8221;: [&#8220;Enterprise&#8221;, &#8220;SMB&#8221;, &#8220;Enterprise&#8221;, &#8220;SMB&#8221;, &#8220;Enterprise&#8221;],&#13;<br \/>\n            &#8220;status&#8221;: 
[&#8220;active&#8221;, &#8220;active&#8221;, &#8220;churned&#8221;, &#8220;active&#8221;, &#8220;active&#8221;],&#13;<br \/>\n            &#8220;revenue&#8221;: [12000, 3500, 8000, 4200, 15000],&#13;<br \/>\n            &#8220;users&#8221;: [120, 70, 80, 84, 150],&#13;<br \/>\n        }&#13;<br \/>\n    )&#13;<br \/>\n&#13;<br \/>\n    summary = (&#13;<br \/>\n        df.lazy()&#13;<br \/>\n        .filter(pl.col(&#8220;status&#8221;) == &#8220;active&#8221;)&#13;<br \/>\n        .with_columns(&#13;<br \/>\n            (pl.col(&#8220;revenue&#8221;) \/ pl.col(&#8220;users&#8221;)).round(2).alias(&#8220;rpu&#8221;)&#13;<br \/>\n        )&#13;<br \/>\n        .group_by(&#8220;segment&#8221;)&#13;<br \/>\n        .agg(&#13;<br \/>\n            pl.len().alias(&#8220;rows&#8221;),&#13;<br \/>\n            pl.col(&#8220;revenue&#8221;).sum().alias(&#8220;total_revenue&#8221;),&#13;<br \/>\n            pl.col(&#8220;rpu&#8221;).mean().round(2).alias(&#8220;avg_rpu&#8221;),&#13;<br \/>\n        )&#13;<br \/>\n        .sort(&#8220;total_revenue&#8221;, descending=True)&#13;<br \/>\n        .collect()&#13;<br \/>\n    )&#13;<br \/>\n&#13;<br \/>\n    print(&#8220;Revenue Summary:&#8221;)&#13;<br \/>\n    print(summary)&#13;<br \/>\n&#13;<br \/>\nif __name__ == &#8220;__main__&#8221;:&#13;<br \/>\n    main()<\/p>\n<p>\u00a0<\/p>\n<p>Before running, you need a build system in pyproject.toml so uv installs your project as a package. 
We will use [Hatchling](https://hatch.pypa.io/):

```shell
cat >> pyproject.toml << 'EOF'

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/my_project"]
EOF
```

Then sync and run:

```shell
uv sync
uv run python -m my_project.main
```

You should see a formatted Polars table:

```
Revenue Summary:
shape: (2, 4)
┌────────────┬──────┬───────────────┬─────────┐
│ segment    ┆ rows ┆ total_revenue ┆ avg_rpu │
│ ---        ┆ ---  ┆ ---           ┆ ---     │
│ str        ┆ u32  ┆ i64           ┆ f64     │
╞════════════╪══════╪═══════════════╪═════════╡
│ Enterprise ┆ 2    ┆ 27000         ┆ 100.0   │
│ SMB        ┆ 2    ┆ 7700          ┆ 50.0    │
└────────────┴──────┴───────────────┴─────────┘
```

# Managing the Daily Workflow

Once the project is set up, the day-to-day loop is straightforward:

```shell
# Pull latest, sync dependencies
git pull
uv sync

# Write code...

# Before committing: lint, format, type-check, test
uv run ruff check --fix .
uv run ruff format .
uv run ty check
uv run pytest

# Commit
git add .
git commit -m "feat: add revenue report module"
```

# Changing the Way You Write Python with Polars

The biggest mindset shift in this stack is on the data side. With Polars, your defaults should be:

- Expressions over row-wise operations. Polars expressions let the engine vectorize and parallelize work. Avoid user-defined functions (UDFs) unless there is no native alternative, as UDFs are significantly slower.
- Lazy execution over eager loading. Use scan_csv() instead of read_csv(). This creates a LazyFrame that builds a query plan, allowing the optimizer to push filters down and eliminate unused columns.
- Parquet-first workflows over CSV-heavy pipelines.
A good pattern for internal data preparation is to convert raw CSV inputs to Parquet once, then run everything downstream against the Parquet files.

# Evaluating When This Setup Is Not the Best Fit

You may want a different choice if:

- Your team has a mature Poetry or mypy workflow that is working well.
- Your codebase depends heavily on pandas-specific APIs or ecosystem libraries.
- Your organization is standardized on Pyright.
- You are working in a legacy repository where changing tools would create more disruption than value.

# Implementing Pro Tips

- Never activate virtual environments manually. Use uv run for everything to ensure you are using the correct environment.
- Always commit uv.lock to version control. This ensures the project resolves identically on every machine.
- Use --frozen in CI. This installs dependencies exactly as pinned in the lockfile for faster, more reliable builds.
- Use uvx for one-off tools. Run tools without installing them into your project.
- Use Ruff's --fix flag liberally. It can auto-fix unused imports, outdated syntax, and more.
- Prefer the lazy API by default. Use scan_csv() and only call .collect() at the end.
- Centralize configuration. Use pyproject.toml as the single source of truth for all tools.

# Concluding Thoughts

The 2026 Python default stack reduces setup effort and encourages better practices: locked environments, a single configuration file, fast feedback, and optimized data pipelines.
Give it a try; once you stop managing environments by hand and run everything through uv, you will understand why developers are switching.

[Kanwal Mehreen](https://www.linkedin.com/in/kanwal-mehreen1/) is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She is also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.