I have a Dask dataframe that looks like this:
```
url   referrer  session_id  ts                   customer
url1  ref1      xxx         2017-09-15 00:00:00  a
url2  ref2      yyy         2017-09-15 00:00:00  a
url2  ref3      yyy         2017-09-15 00:00:00  a
url1  ref1      xxx         2017-09-15 01:00:00  a
url2  ref2      yyy         2017-09-15 01:00:00  a
```
I want to group the data on url and timestamp, aggregate column values and produce a dataframe that would look like this instead:
```
customer  url   ts                   page_views  visitors  referrers
a         url1  2017-09-15 00:00:00  1           1         [ref1]
a         url2  2017-09-15 00:00:00  2           2         [ref2, ref3]
```
In Spark SQL, I can do this as follows:
```sql
select customer, url, ts,
       count(*) as page_views,
       count(distinct session_id) as visitors,
       collect_list(referrer) as referrers
from df
group by customer, url, ts
```
Is there any way I can do it with Dask dataframes? I tried, but I can only calculate the aggregated columns separately, as follows:
```python
# group on timestamp (rounded) and url
grouped = df.groupby(['ts', 'url'])

# calculate page views (count rows in each group)
page_views = grouped.size()

# collect a list of referrer strings per group
referrers = grouped['referrer'].apply(list, meta=('referrers', 'f8'))

# count unique visitors (session ids)
visitors = grouped['session_id'].count()
```
But I can't seem to find a good way to produce a combined dataframe that I need.
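For what it's worth, the separate per-group results above all share the same group index, so one way to stitch them into a single frame is a column-wise concat. A minimal sketch, shown with plain pandas for concreteness (the sample frame is reconstructed from the question's input table; with Dask the same idea applies, though concatenating lazy results column-wise can require known divisions):

```python
import pandas as pd

# Sample data matching the question's input table.
df = pd.DataFrame({
    'url': ['url1', 'url2', 'url2', 'url1', 'url2'],
    'referrer': ['ref1', 'ref2', 'ref3', 'ref1', 'ref2'],
    'session_id': ['xxx', 'yyy', 'yyy', 'xxx', 'yyy'],
    'ts': ['2017-09-15 00:00:00'] * 3 + ['2017-09-15 01:00:00'] * 2,
})

grouped = df.groupby(['ts', 'url'])
page_views = grouped.size().rename('page_views')
referrers = grouped['referrer'].apply(list).rename('referrers')
visitors = grouped['session_id'].count().rename('visitors')

# All three series are indexed by (ts, url), so they line up column-wise.
combined = pd.concat([page_views, visitors, referrers], axis=1).reset_index()
```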
Answer

The following does work:
```python
gb = df.groupby(['customer', 'url', 'ts'])
gb.apply(lambda d: pd.DataFrame({
    'views': len(d),
    'visitors': d.session_id.count(),
    'referrers': [d.referrer.tolist()]})).reset_index()
```
(assuming visitors should be unique as per the sql above) You may wish to define the meta of the output.
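The answer's pattern, run end-to-end with plain pandas for concreteness (the sample frame below is reconstructed from the question; with Dask you would additionally pass the `meta=` mentioned above, e.g. a dtype mapping like `{'views': 'i8', 'visitors': 'i8', 'referrers': 'object'}` — that exact mapping is an assumption):

```python
import pandas as pd

# Sample data matching the question's input table.
df = pd.DataFrame({
    'customer': ['a'] * 5,
    'url': ['url1', 'url2', 'url2', 'url1', 'url2'],
    'session_id': ['xxx', 'yyy', 'yyy', 'xxx', 'yyy'],
    'ts': ['2017-09-15 00:00:00'] * 3 + ['2017-09-15 01:00:00'] * 2,
    'referrer': ['ref1', 'ref2', 'ref3', 'ref1', 'ref2'],
})

# One aggregated row per (customer, url, ts) group, with all three
# derived columns built in a single apply, as in the answer.
out = (df.groupby(['customer', 'url', 'ts'])
         .apply(lambda d: pd.DataFrame({
             'views': len(d),
             'visitors': d.session_id.count(),
             'referrers': [d.referrer.tolist()]}))
         .reset_index())
```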