Spark SQL Programming in Python


This post uses PyCharm to write and run a Spark SQL program in Python (PySpark): load a delimited text file into an RDD, attach a schema with StructType, register a temporary view, and query it with either SQL or the DataFrame API.
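The program reads D:\data\demo.txt and splits each line on the | delimiter, so the input file is assumed to hold one record per line with three pipe-separated fields: id, name, and age. A hypothetical sample:

    1|zhangsan|20
    2|lisi|25
    3|wangwu|19

Running it locally only needs the pyspark package on the PyCharm interpreter (e.g. pip install pyspark).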

            
from pyspark.sql import Row
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StringType, StructType

if __name__ == "__main__":
    spark = SparkSession\
            .builder\
            .appName("app name")\
            .master("local")\
            .getOrCreate()
    sc = spark.sparkContext
    # Read the pipe-delimited file into an RDD of field lists
    line = sc.textFile("D:\\data\\demo.txt").map(lambda x: x.split('|'))
    # Alternative: let Spark infer the schema from Rows with named, typed fields
    # personRdd = line.map(lambda p: Row(id=p[0], name=p[1], age=int(p[2])))
    # personRdd_tmp = spark.createDataFrame(personRdd)
    # personRdd_tmp.show()

    # Build the schema explicitly: every column is declared as a nullable string
    schemaString = "id name age"
    fields = [StructField(fieldName, StringType(), nullable=True) for fieldName in schemaString.split(" ")]
    schema = StructType(fields)

    # Wrap each split line in a Row and combine it with the schema
    rowRDD = line.map(lambda attributes: Row(attributes[0], attributes[1], attributes[2]))
    peopleDF = spark.createDataFrame(rowRDD, schema)
    peopleDF.createOrReplaceTempView("people")
    results = spark.sql("SELECT * FROM people")
    # Fields are positional: [0] is id, [1] is name, [2] is age
    results.rdd.map(lambda attributes: "name: " + attributes[1] + ", age: " + attributes[2]).foreach(print)
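    # With the hypothetical sample data above, this would print lines such as:
    #   name: zhangsan, age: 20
    #   name: lisi, age: 25
    #   name: wangwu, age: 19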

    # SQL-style syntax
    # personRdd_tmp.registerTempTable("person")  # deprecated since Spark 2.0; prefer createOrReplaceTempView
    # spark.sql("select * from person where age >= 20 order by age desc limit 2").show()

    # Method-style (DataFrame DSL) syntax
    # personRdd_tmp.select("name").show()
    # personRdd_tmp.select(personRdd_tmp['name'], personRdd_tmp['age'] + 1).show()
    # personRdd_tmp.filter(personRdd_tmp['age'] > 21).show()
    # personRdd_tmp.groupBy("age").count().show()

    
    # A temp view is scoped to the current SparkSession:
    # personRdd_tmp.createOrReplaceTempView("people")
    # sqlDF = spark.sql("SELECT * FROM people")
    # sqlDF.show()

    # A global temp view is registered in the global_temp database and is visible to new sessions:
    # personRdd_tmp.createGlobalTempView("people")
    # spark.sql("SELECT * FROM global_temp.people").show()
    # spark.newSession().sql("SELECT * FROM global_temp.people").show()

    # Save the query results in a chosen format
    # people = line.map(lambda p: (p[0], p[1], p[2].strip()))
    # schemaString = "id name age"
    # # Specify each field's schema directly via StructType
    # fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
    # schema = StructType(fields)
    # schemaPeople = spark.createDataFrame(people, schema)
    # schemaPeople.createOrReplaceTempView("people")
    # results = spark.sql("SELECT * FROM people")
    # # Each write produces a directory of part files; save() defaults to Parquet
    # results.write.json("D:\\code\\hadoop\\data\\spark\\day4\\personout.txt")
    # results.write.save("D:\\code\\hadoop\\data\\spark\\day4\\personout1")

    # results.show()
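
If the commented-out write.json step is uncommented and run, the output directory can be read back and queried with the DataFrame API. A minimal sketch, assuming the same session and the output path used above (the columns were saved as strings, hence the cast):

    # Continues from the session above: load the JSON directory into a DataFrame
    reloaded = spark.read.json("D:\\code\\hadoop\\data\\spark\\day4\\personout.txt")
    # age was stored as a string, so cast it before the numeric comparison
    reloaded.filter(reloaded["age"].cast("int") >= 20).select("name", "age").show()
    # Stop the session when done
    spark.stop()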
