Should You Use Python or SQL? 4 Cases That Will Save You Time
In data analytics, any question you ask of a dataset can be answered with any number of languages and toolkits, and each language has its own strengths and trade-offs. There is no denying, though, that some operations are simply more efficient in Python than in SQL. This post shares four time-saving cases in which Python far outperforms SQL for exploring and analyzing a dataset.
Descriptive Statistics
Suppose you're exploring a new dataset. There are many statistics you can use to describe the data and get an overall feel for it, for example:
· Count
· Mean
· Standard deviation
· Minimum
· First quartile
· Second quartile (median)
· Third quartile
· Maximum
To get this information in SQL, you would have to type:
SELECT ROUND(MIN(first_class_rev)::numeric, 2) AS min,
       ROUND(PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY first_class_rev)::numeric, 2) AS pc25,
       ROUND(PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY first_class_rev)::numeric, 2) AS median,
       ROUND(AVG(first_class_rev)::numeric, 2) AS mean,
       ROUND(PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY first_class_rev)::numeric, 2) AS pc75,
       ROUND(MAX(first_class_rev)::numeric, 2) AS max
FROM tutorial.flight_revenue
But you probably don't want to type all of that. With pandas' DataFrame.describe() you can get the basic descriptive statistics of the underlying dataset in a single call.
datasets['Python Input'].describe().round(1)
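The datasets dictionary is how Mode's Notebook exposes query results. Outside of Mode, a minimal self-contained sketch of the same idea could look like this; the DataFrame and its values are made up purely to mirror the first_class_rev column from the SQL query above:

import pandas as pd

# Hypothetical stand-in for the tutorial.flight_revenue query result
flight_revenue = pd.DataFrame({'first_class_rev': [1200.50, 980.00, 1430.25, 1105.75, 990.40]})

# count, mean, std, min, 25%, 50%, 75%, and max in one call
flight_revenue.describe().round(1)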
Take a look at this report and use it as a starting point to explore descriptive statistics on your own.
Moving Averages
Now suppose you want to calculate a moving average, so that you have a well-defined average even as the input keeps changing. Moving averages help smooth out dips and spikes in the data, making long-term trends much easier to see.
In SQL, you could write a query like this:
WITH input AS (
SELECT
COUNT(1) AS trips,
DATE_TRUNC('day',start_date) AS date
FROM modeanalytics.sf_bike_share_trip
GROUP BY 2
)
SELECT
trips,
AVG(trips) OVER (ORDER BY date ROWS BETWEEN 13 PRECEDING AND CURRENT ROW) AS mvg_avg, -- Window function to calculate the 14-day moving average of ride share trips
SUM(trips) OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS running_total_trips, -- Window function to calculate the running total number of ride share trips
LAG(trips, 1) OVER (ORDER BY date) AS num_trips_previous_day, -- Window function to grab the number of trips on the previous day
trips - LAG(trips, 7) OVER (ORDER BY date) AS wow_difference, -- Window function to calculate the week-over-week difference in trips
(trips - LAG(trips, 7) OVER (ORDER BY date)) / LAG(trips, 7) OVER (ORDER BY date)::DECIMAL(18,2) AS wow_percent_change, -- Window function to calculate the week-over-week percent change in trips
date
FROM input
ORDER BY date;
In Python, the same two-week moving average takes just a few lines of code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = datasets["Trips - Python Window"]  # daily trip counts exposed by Mode's Notebook
df["mvg_avg"] = df.trips.rolling(14).mean()  # 14-day moving average
And Python makes it easy to go one step further and visualize the result.
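For example, a quick sketch of that visualization, assuming the query result also includes a date column alongside trips:

# Plot raw daily trips against the 14-day moving average
df.plot(x="date", y=["trips", "mvg_avg"], figsize=(10, 4))
plt.show()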
Pivoting
Rearranging and pivoting data for a chart or for a presentation format takes several steps in SQL. In this case, we'll pivot a dataset of college football players in the Mode Public Warehouse from rows to columns.
First, prepare the data by aggregating the player counts by conference and year.
SELECT teams.conference AS conference,
players.year,
COUNT(1) AS players
FROM benn.college_football_players players
JOIN benn.college_football_teams teams
ON teams.school_name = players.school_name
GROUP BY 1,2
ORDER BY 1,2
After that transformation, you need to nest the query inside a subquery.
SELECT *
FROM (
SELECT teams.conference AS conference,
players.year,
COUNT(1) AS players
FROM benn.college_football_players players
JOIN benn.college_football_teams teams
ON teams.school_name = players.school_name
GROUP BY 1,2
) sub
Next, if you want to break the results out into one column per year, you can use a SELECT statement to create a separate column for each value.
SELECT conference,
SUM(CASE WHEN year = 'FR' THEN players ELSE NULL END) AS fr,
SUM(CASE WHEN year = 'SO' THEN players ELSE NULL END) AS so,
SUM(CASE WHEN year = 'JR' THEN players ELSE NULL END) AS jr,
SUM(CASE WHEN year = 'SR' THEN players ELSE NULL END) AS sr
FROM (
SELECT teams.conference AS conference,
players.year,
COUNT(1) AS players
FROM benn.college_football_players players
JOIN benn.college_football_teams teams
ON teams.school_name = players.school_name
GROUP BY 1,2
) sub
GROUP BY 1
ORDER BY 1
While this gives you fine-grained control over the query and the process, the same operation is much easier in Python. Once you've SELECTed the college football players by conference and year, you can jump into a Notebook and run DataFrame.pivot, which reshapes the data based on column values so you can rearrange your result set.
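A rough sketch of that pivot, assuming the aggregated query result is available in the Notebook as a DataFrame with conference, year, and players columns (the dataset name below is hypothetical):

df = datasets['Football Players']  # hypothetical name for the aggregated query result
# One row per conference, one column per year (FR/SO/JR/SR), player counts as values
df.pivot(index='conference', columns='year', values='players')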
Self Joins
There are plenty of situations in which you might want to join a table to itself. To create a self join, you start by writing a query like this one, which gives the same table two different aliases:
SELECT DISTINCT japan_investments.company_name,
japan_investments.company_permalink
FROM tutorial.crunchbase_investments_part1 japan_investments
JOIN tutorial.crunchbase_investments_part1 gb_investments
ON japan_investments.company_name = gb_investments.company_name
AND gb_investments.investor_country_code = 'GBR'
AND gb_investments.funded_at > japan_investments.funded_at
WHERE japan_investments.investor_country_code = 'JPN'
ORDER BY 1
In pandas, we can do the same kind of thing like this:
import pandas as pd

d = pd.DataFrame(['A', 'B', 'C'], columns=['Column_1'])
d['Column_2'] = [1, 2, 1]
# Join the frame to itself: Column_2 values are looked up against the index of the dropped copy
d.join(d.drop('Column_2', axis=1), on='Column_2', rsuffix='.1')
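If you want something closer in spirit to the SQL above, you can also merge a DataFrame with itself on a key column. This is only a sketch, assuming an investments DataFrame with the same columns as tutorial.crunchbase_investments_part1:

# Hypothetical DataFrame mirroring tutorial.crunchbase_investments_part1
japan = investments[investments.investor_country_code == 'JPN']
gb = investments[investments.investor_country_code == 'GBR']
# Self join on company_name, keeping companies whose GB investment came after the Japanese one
merged = japan.merge(gb, on='company_name', suffixes=('_jpn', '_gb'))
result = merged.loc[merged.funded_at_gb > merged.funded_at_jpn,
                    ['company_name', 'company_permalink_jpn']].drop_duplicates()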
Want to try building a self join yourself? Use this report as a model and build your own Mode report!