Parsing Simple HTML Tables in Python

This article presents a worked example of parsing simple HTML tables in Python. It is shared for your reference; the details are as follows:

The code depends on libxml2dom, so make sure it is installed first! Import it into your script and call the parse_tables() function, which takes three arguments:

1. source = a string containing the source code; you can pass in just the table or the entire page.

2. headers = a list of ints OR a list of strings.
If the headers are ints, this is for tables with no header row: list the 0-based indexes of the cells you want to extract from each row.
If the headers are strings, this is for tables with header columns (marked with <th> tags): the data is pulled from the columns whose headers match.

3. table_index = the 0-based index of the table in the source code. If there are multiple tables and the one you want to parse is the third table in the code, pass in the number 2 here.

It will return a list of lists; each inner list contains the parsed information for one row.
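For example, assuming a variable source already holds the page markup, the two calling conventions might look like this (the column names and indexes here are made up for illustration):

#table with <th> headers: pull the "Name" and "Price" columns
#from the first table on the page (index 0)
rows = parse_tables(source, ['Name', 'Price'], 0)
#table without headers: pull the 1st and 3rd cell of each row
#(0-based indexes 0 and 2) from the third table on the page (index 2)
rows = parse_tables(source, [0, 2], 2)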

The full code is as follows:

#The goal of table parser is to get specific information from specific
#columns in a table.
#Input: source code from a typical website
#Arguments: a list of headers the user wants to return
#Output: A list of lists of the data in each row
import libxml2dom
def parse_tables(source, headers, table_index):
  """parse_tables(string source, list headers, table_index)
    headers may be a list of strings if the table has headers defined or
    headers may be a list of ints if no headers defined this will get data
    from the rows index.
    This method returns a list of lists
    """
  #determine whether the headers list holds ints or strings
  print 'Printing headers:', headers
  #route to the correct function
  #if the header type is int
  if isinstance(headers[0], int):
    #run no_header function
    return no_header(source, headers, table_index)
  #if the header type is string
  elif isinstance(headers[0], basestring):
    #run the header_given function
    return header_given(source, headers, table_index)
  else:
    #return none if the headers aren't correct
    return None
#This function takes in the source code of the whole page, a string list of
#headers, and the index number of the table on the page. It returns a list of
#lists with the scraped information
def header_given(source, headers, table_index):
  #initiate a list to hold the returned rows
  return_list = []
  #initiate a list to hold the index numbers of the data in the rows
  header_index = []
  #get a document object out of the source code
  doc = libxml2dom.parseString(source,html=1)
  #get the tables from the document
  tables = doc.getElementsByTagName('table')
  try:
    #try to get focus on the desired table
    main_table = tables[table_index]
  except IndexError:
    #if the table doesn't exist then return an error
    return ['The table index was not found']
  #get a list of headers in the table
  table_headers = main_table.getElementsByTagName('th')
  #loop through each header looking for matches, tracking the column index
  for index, header in enumerate(table_headers):
    #if the header text is in the desired headers list
    if header.textContent in headers:
      #record the column index
      header_index.append(index)
  #get the rows from the table
  rows = main_table.getElementsByTagName('tr')
  #flag marking whether we are still on the header row
  first_row = True
  #loop through the rows in the table, skipping the header row
  for row in rows:
    #the first row holds the headers, not data
    if first_row:
      first_row = False
      continue
    #get all cells from the current row
    cells = row.getElementsByTagName('td')
    #initiate a list to append into the return_list
    cell_list = []
    #iterate through all of the header indexes
    for i in header_index:
      #append the cells text content to the cell_list
      cell_list.append(cells[i].textContent)
    #append the cell_list to the return_list
    return_list.append(cell_list)
  #return the return_list
  return return_list
#This function takes in the source code of the whole page, an int list of
#headers giving the index numbers of the needed cells, and the index number
#of the table on the page. It returns a list of lists with the scraped info
def no_header(source, headers, table_index):
  #initiate a list to hold the return list
  return_list = []
  #get a document object out of the source code
  doc = libxml2dom.parseString(source, html=1)
  #get the tables from document
  tables = doc.getElementsByTagName('table')
  try:
    #try to get focus on the desired table
    main_table = tables[table_index]
  except IndexError:
    #if the table doesn't exist then return an error
    return ['The table index was not found']
  #get all of the rows out of the main_table
  rows = main_table.getElementsByTagName('tr')
  #loop through each row
  for row in rows:
    #get all cells from the current row
    cells = row.getElementsByTagName('td')
    #initiate a list to append into the return_list
    cell_list = []
    #loop through the list of desired headers
    for i in headers:
      try:
        #try to add the cell's text content to the cell_list
        cell_list.append(cells[i].textContent)
      except IndexError:
        #skip cells that don't exist in this row
        continue
    #append the data scraped into the return_list
    return_list.append(cell_list)
  #return the return list
  return return_list
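
The following is a minimal, self-contained sketch of calling parse_tables() on a made-up HTML snippet (libxml2dom targets Python 2, so the print statements match the Python 2 syntax above):

import libxml2dom
#a made-up two-row table used only to demonstrate both calling conventions
html = """
<table>
 <tr><th>Name</th><th>Age</th><th>City</th></tr>
 <tr><td>Alice</td><td>30</td><td>Paris</td></tr>
 <tr><td>Bob</td><td>25</td><td>Tokyo</td></tr>
</table>
"""
#match the <th> headers by name; the header row itself is skipped
print parse_tables(html, ['Name', 'City'], 0)
#expected: [['Alice', 'Paris'], ['Bob', 'Tokyo']]
#address the same columns by 0-based index instead
print parse_tables(html, [0, 2], 0)
#with int indexes the header row is not skipped; since it holds no <td>
#cells it contributes an empty list at the front of the result:
#expected: [[], ['Alice', 'Paris'], ['Bob', 'Tokyo']]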

Hopefully this article is of some help to readers working on Python programs.
