MarkTechPost@AI · June 18
Building High-Performance Financial Analytics Pipelines with Polars: Lazy Evaluation, Advanced Expressions, and SQL Integration

This article shows how to build an efficient data analytics pipeline with Polars for processing large-scale financial data. Starting from a synthetic financial time-series dataset, it walks through the full workflow, from feature engineering and rolling statistics to multi-dimensional analysis and ranking. Polars' lazy evaluation, complex expressions, window functions, and SQL interface are used to process the data efficiently while keeping memory usage low and execution fast. The article also demonstrates data transformations in Polars and shows how to export the results.

💡 The article first imports the necessary libraries, including Polars for high-performance DataFrame operations and NumPy for generating synthetic data. A fallback installation step for Polars is added to ensure compatibility.

📈 It then generates a synthetic financial dataset of 100,000 records simulating daily data for major tickers, with key market features such as price, volume, bid-ask spread, market cap, and sector.

⚙️ Next, the dataset is loaded into a Polars LazyFrame for deferred execution, so that complex transformations can be chained efficiently. Window and rolling functions then add time-based features and apply technical indicators such as moving averages, RSI, and Bollinger Bands.

📊 The article then groups and aggregates by ticker, year, and quarter to extract key financial statistics and indicators. Finally, it ranks the results by volume and volatility, filters out under-traded segments, and sorts the data for easier exploration.

🔍 The article also showcases Polars' SQL interface, running aggregate queries to analyze post-2021 ticker performance. Performance metrics highlight lazy evaluation, memory efficiency, and zero-copy execution, and the results can be exported to multiple formats.

In this tutorial, we delve into building an advanced data analytics pipeline using Polars, a lightning-fast DataFrame library designed for optimal performance and scalability. Our goal is to demonstrate how we can utilize Polars’ lazy evaluation, complex expressions, window functions, and SQL interface to process large-scale financial datasets efficiently. We begin by generating a synthetic financial time series dataset and move step-by-step through an end-to-end pipeline, from feature engineering and rolling statistics to multi-dimensional analysis and ranking. Throughout, we demonstrate how Polars empowers us to write expressive and performant data transformations, all while maintaining low memory usage and ensuring fast execution.

# Import Polars, installing it on the fly if it isn't already available
try:
    import polars as pl
except ImportError:
    import subprocess
    subprocess.run(["pip", "install", "polars"], check=True)
    import polars as pl

import numpy as np
from datetime import datetime, timedelta

print("Advanced Polars Analytics Pipeline")
print("=" * 50)

We begin by importing the essential libraries: Polars for high-performance DataFrame operations and NumPy for generating synthetic data. To ensure compatibility, the Polars import is wrapped in a fallback installation step in case it isn't already installed. With the setup ready, we signal the start of our advanced analytics pipeline.

np.random.seed(42)
n_records = 100000
dates = [datetime(2020, 1, 1) + timedelta(days=i // 100) for i in range(n_records)]
tickers = np.random.choice(['AAPL', 'GOOGL', 'MSFT', 'TSLA', 'AMZN'], n_records)

# Create complex synthetic dataset
data = {
    'timestamp': dates,
    'ticker': tickers,
    'price': np.random.lognormal(4, 0.3, n_records),
    'volume': np.random.exponential(1000000, n_records).astype(int),
    'bid_ask_spread': np.random.exponential(0.01, n_records),
    'market_cap': np.random.lognormal(25, 1, n_records),
    'sector': np.random.choice(['Tech', 'Finance', 'Healthcare', 'Energy'], n_records)
}

print(f"Generated {n_records:,} synthetic financial records")

We generate a rich, synthetic financial dataset with 100,000 records using NumPy, simulating daily stock data for major tickers such as AAPL and TSLA. Each entry includes key market features such as price, volume, bid-ask spread, market cap, and sector. This provides a realistic foundation for demonstrating advanced Polars analytics on a time-series dataset.
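
Before pushing this data through the lazy pipeline, it can be worth eyeballing a few rows eagerly. A minimal sanity check, assuming the data dictionary from the block above is in scope:

# Quick eager sanity check of the synthetic data (assumes `data` from the previous block)
preview = pl.DataFrame(data)
print(preview.head())
print(preview.select([
    pl.col('price').min().alias('min_price'),
    pl.col('price').max().alias('max_price'),
    pl.col('volume').mean().alias('avg_volume')
]))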

lf = pl.LazyFrame(data)

result = (
    lf
    .with_columns([
        pl.col('timestamp').dt.year().alias('year'),
        pl.col('timestamp').dt.month().alias('month'),
        pl.col('timestamp').dt.weekday().alias('weekday'),
        pl.col('timestamp').dt.quarter().alias('quarter')
    ])
    .with_columns([
        pl.col('price').rolling_mean(20).over('ticker').alias('sma_20'),
        pl.col('price').rolling_std(20).over('ticker').alias('volatility_20'),
        pl.col('price').ewm_mean(span=12).over('ticker').alias('ema_12'),
        pl.col('price').diff().alias('price_diff'),
        (pl.col('volume') * pl.col('price')).alias('dollar_volume')
    ])
    .with_columns([
        pl.col('price_diff').clip(0, None).rolling_mean(14).over('ticker').alias('rsi_up'),
        pl.col('price_diff').abs().rolling_mean(14).over('ticker').alias('rsi_down'),
        (pl.col('price') - pl.col('sma_20')).alias('bb_position')
    ])
    .with_columns([
        (100 - (100 / (1 + pl.col('rsi_up') / pl.col('rsi_down')))).alias('rsi')
    ])
    .filter(
        (pl.col('price') > 10) &
        (pl.col('volume') > 100000) &
        (pl.col('sma_20').is_not_null())
    )
    .group_by(['ticker', 'year', 'quarter'])
    .agg([
        pl.col('price').mean().alias('avg_price'),
        pl.col('price').std().alias('price_volatility'),
        pl.col('price').min().alias('min_price'),
        pl.col('price').max().alias('max_price'),
        pl.col('price').quantile(0.5).alias('median_price'),
        pl.col('volume').sum().alias('total_volume'),
        pl.col('dollar_volume').sum().alias('total_dollar_volume'),
        pl.col('rsi').filter(pl.col('rsi').is_not_null()).mean().alias('avg_rsi'),
        pl.col('volatility_20').mean().alias('avg_volatility'),
        pl.col('bb_position').std().alias('bollinger_deviation'),
        pl.len().alias('trading_days'),
        pl.col('sector').n_unique().alias('sectors_count'),
        (pl.col('price') > pl.col('sma_20')).mean().alias('above_sma_ratio'),
        ((pl.col('price').max() - pl.col('price').min()) / pl.col('price').min())
            .alias('price_range_pct')
    ])
    .with_columns([
        pl.col('total_dollar_volume').rank(method='ordinal', descending=True).alias('volume_rank'),
        pl.col('price_volatility').rank(method='ordinal', descending=True).alias('volatility_rank')
    ])
    .filter(pl.col('trading_days') >= 10)
    .sort(['ticker', 'year', 'quarter'])
)

We load our synthetic dataset into a Polars LazyFrame to enable deferred execution, allowing us to chain complex transformations efficiently. From there, we enrich the data with time-based features and apply advanced technical indicators, such as moving averages, RSI, and Bollinger bands, using window and rolling functions. We then perform grouped aggregations by ticker, year, and quarter to extract key financial statistics and indicators. Finally, we rank the results based on volume and volatility, filter out under-traded segments, and sort the data for intuitive exploration, all while leveraging Polars’ powerful lazy evaluation engine to its full advantage.
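
Because result is still a LazyFrame at this point, nothing has actually executed. One way to confirm what the query optimizer will do before triggering any work is to print the plan; a minimal sketch (the exact plan text varies across Polars versions):

# Inspect the optimized logical plan before any data is touched
# (output format depends on your Polars version)
print(result.explain())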

df = result.collect()

print(f"\nAnalysis Results: {df.height:,} aggregated records")
print("\nTop 10 High-Volume Quarters:")
print(df.sort('total_dollar_volume', descending=True).head(10).to_pandas())

print("\nAdvanced Analytics:")
pivot_analysis = (
    df.group_by('ticker')
    .agg([
        pl.col('avg_price').mean().alias('overall_avg_price'),
        pl.col('price_volatility').mean().alias('overall_volatility'),
        pl.col('total_dollar_volume').sum().alias('lifetime_volume'),
        pl.col('above_sma_ratio').mean().alias('momentum_score'),
        pl.col('price_range_pct').mean().alias('avg_range_pct')
    ])
    .with_columns([
        (pl.col('overall_avg_price') / pl.col('overall_volatility')).alias('risk_adj_score'),
        (pl.col('momentum_score') * 0.4 +
         pl.col('avg_range_pct') * 0.3 +
         (pl.col('lifetime_volume') / pl.col('lifetime_volume').max()) * 0.3)
            .alias('composite_score')
    ])
    .sort('composite_score', descending=True)
)

print("\nTicker Performance Ranking:")
print(pivot_analysis.to_pandas())

Once our lazy pipeline is complete, we collect the results into a DataFrame and immediately review the top 10 quarters based on total dollar volume. This helps us identify periods of intense trading activity. We then take our analysis a step further by grouping the data by ticker to compute higher-level insights, such as lifetime trading volume, average price volatility, and a custom composite score. This multi-dimensional summary helps us compare stocks not just by raw volume, but also by momentum and risk-adjusted performance, unlocking deeper insights into overall ticker behavior.
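
If we only want the single best ticker by composite score, a small follow-up like the sketch below works; the column names match those defined in the aggregation above:

# Pull the top-ranked ticker and its composite score
top = pivot_analysis.select(['ticker', 'composite_score']).head(1)
print(f"Top composite ticker: {top['ticker'][0]} "
      f"(score: {top['composite_score'][0]:.3f})")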

print("\n SQL Interface Demo:")pl.Config.set_tbl_rows(5)sql_result = pl.sql("""    SELECT        ticker,        AVG(avg_price) as mean_price,        STDDEV(price_volatility) as volatility_consistency,        SUM(total_dollar_volume) as total_volume,        COUNT(*) as quarters_tracked    FROM df    WHERE year >= 2021    GROUP BY ticker    ORDER BY total_volume DESC""", eager=True)print(sql_result)print(f"\n Performance Metrics:")print(f"   • Lazy evaluation optimizations applied")print(f"   • {n_records:,} records processed efficiently")print(f"   • Memory-efficient columnar operations")print(f"   • Zero-copy operations where possible")print(f"\n Export Options:")print("   • Parquet (high compression): df.write_parquet('data.parquet')")print("   • Delta Lake: df.write_delta('delta_table')")print("   • JSON streaming: df.write_ndjson('data.jsonl')")print("   • Apache Arrow: df.to_arrow()")print("\n Advanced Polars pipeline completed successfully!")print(" Demonstrated: Lazy evaluation, complex expressions, window functions,")print("   SQL interface, advanced aggregations, and high-performance analytics")

We wrap up the pipeline by showcasing Polars’ elegant SQL interface, running an aggregate query to analyze post-2021 ticker performance with familiar SQL syntax. This hybrid capability enables us to blend expressive Polars transformations with declarative SQL queries seamlessly. To highlight its efficiency, we print key performance metrics, emphasizing lazy evaluation, memory efficiency, and zero-copy execution. Finally, we demonstrate how easily we can export results in various formats, such as Parquet, Arrow, and JSONL, making this pipeline both powerful and production-ready. With that, we complete a full-circle, high-performance analytics workflow using Polars.
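
The final block only prints the export options; if you actually want to persist the aggregated results, a minimal sketch looks like this. The file names are illustrative, and write_delta requires the optional deltalake dependency:

# Persist the aggregated results (paths are illustrative)
df.write_parquet('quarterly_metrics.parquet')   # columnar, high compression
df.write_ndjson('quarterly_metrics.jsonl')      # newline-delimited JSON for streaming
arrow_table = df.to_arrow()                     # zero-copy handoff to pyarrow
# df.write_delta('quarterly_metrics_delta')     # needs the optional `deltalake` package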

In conclusion, we’ve seen firsthand how Polars’ lazy API can optimize complex analytics workflows that would otherwise be sluggish in traditional tools. We’ve developed a comprehensive financial analysis pipeline, spanning from raw data ingestion to rolling indicators, grouped aggregations, and advanced scoring, all executed with blazing speed. Not only that, but we also tapped into Polars’ powerful SQL interface to run familiar queries seamlessly over our DataFrames. This dual ability to write both functional-style expressions and SQL makes Polars an incredibly flexible tool for any data scientist.


