25 Python Text Processing Examples Worth Keeping

Contents
  • 1. Extract PDF content
  • 2. Extract Word content
  • 3. Extract web page content
  • 4. Read JSON data
  • 5. Read CSV data
  • 6. Remove punctuation from a string
  • 7. Remove stop words with NLTK
  • 8. Correct spelling with TextBlob
  • 9. Word tokenization with NLTK and TextBlob
  • 10. Stem the words of a sentence or phrase with NLTK
  • 11. Lemmatize a sentence or phrase with NLTK
  • 12. Find the frequency of each word in a text file with NLTK
  • 13. Create a word cloud from a corpus
  • 14. NLTK lexical dispersion plot
  • 15. Convert text to numbers with CountVectorizer
  • 16. Create a document-term matrix with TF-IDF
  • 17. Generate N-grams for a given sentence
  • 18. Vocabulary with bigrams using sklearn CountVectorizer
  • 19. Extract noun phrases with TextBlob
  • 20. Build a word-word co-occurrence matrix
  • 21. Sentiment analysis with TextBlob
  • 22. Language translation with Goslate
  • 23. Language detection and translation with TextBlob
  • 24. Get definitions and synonyms with TextBlob
  • 25. Get a list of antonyms with TextBlob

1. Extract PDF content

# pip install PyPDF2
import PyPDF2
 
# Creating a pdf file object.
pdf = open("test.pdf", "rb")
 
# Creating pdf reader object.
pdf_reader = PyPDF2.PdfFileReader(pdf)
 
# Checking total number of pages in a pdf file.
print("Total number of Pages:", pdf_reader.numPages)
 
# Creating a page object (index 200 here; the PDF must have at least 201 pages).
page = pdf_reader.getPage(200)
 
# Extract data from a specific page number.
print(page.extractText())
 
# Closing the object.
pdf.close()
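
The PdfFileReader API above belongs to the old PyPDF2 1.x line; newer PyPDF2 releases and its successor package pypdf renamed these methods. A minimal sketch of the same steps, assuming pypdf is installed (pip install pypdf):

from pypdf import PdfReader

reader = PdfReader("test.pdf")

# Total number of pages in the PDF.
print("Total number of Pages:", len(reader.pages))

# Extract text from a specific page (index 200 here, as above).
print(reader.pages[200].extract_text())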

2. Extract Word content

# pip install python-docx

import docx
 
 
def main():
    try:
        doc = docx.Document('test.docx')  # Creating a Word reader object.
        fullText = []
        for para in doc.paragraphs:
            fullText.append(para.text)
        data = '\n'.join(fullText)  # Join the paragraphs once, after the loop.
 
        print(data)
 
    except IOError:
        print('There was an error opening the file!')
        return
 
 
if __name__ == '__main__':
    main()

3. Extract web page content

# pip install bs4

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
 
req = Request('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
              headers={'User-Agent': 'Mozilla/5.0'})
 
webpage = urlopen(req).read()
 
# Parsing
soup = BeautifulSoup(webpage, 'html.parser')
 
# Formatting the parsed html file
strhtm = soup.prettify()
 
# Print the first 500 characters
print(strhtm[:500])
 
# Extract meta tag value
print(soup.title.string)
print(soup.find('meta', attrs={'property':'og:description'}))
 
# Extract anchor tag value
for x in soup.find_all('a'):
    print(x.string)
 
# Extract Paragraph tag value    
for x in soup.find_all('p'):
    print(x.text)
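
The same fetch can also be written with the requests library (used in the next example), which handles headers and encoding with less ceremony than urllib. A minimal sketch, assuming requests is installed:

import requests
from bs4 import BeautifulSoup

resp = requests.get('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
                    headers={'User-Agent': 'Mozilla/5.0'})

soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.title.string)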

4. Read JSON data

import requests
import json

r = requests.get("https://support.oneskyapp.com/hc/en-us/article_attachments/202761727/example_2.json")
res = r.json()

# Extract specific node content.
print(res['quiz']['sport'])

# Dump data as string
data = json.dumps(res)
print(data)
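
For a JSON document stored locally instead of fetched over HTTP, json.load reads straight from a file object. A minimal sketch, assuming a hypothetical example.json file with the same quiz structure as the sample above:

import json

with open('example.json', 'r', encoding='utf-8') as f:
    res = json.load(f)

print(res['quiz']['sport'])  # Same nested access as above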

5. Read CSV data

import csv

with open('test.csv', 'r') as csv_file:
    reader = csv.reader(csv_file)
    next(reader)  # Skip the header row
    for row in reader:
        print(row)
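
csv.DictReader maps each row to the header names automatically, so the manual header skip is no longer needed. A minimal sketch against the same test.csv:

import csv

with open('test.csv', 'r') as csv_file:
    reader = csv.DictReader(csv_file)  # The first row becomes the keys
    for row in reader:
        print(row)  # Each row is a dict keyed by column name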

6. Remove punctuation from a string

import re
import string
 
data = "Stuning even for the non-gamer: This sound track was beautiful!\
It paints the senery in your mind so well I would recomend\
it even to people who hate vid. game music! I have played the game Chrono \
Cross but out of all of the games I have ever played it has the best music! \
It backs away from crude keyboarding and takes a fresher step with grate\
guitars and soulful orchestras.\
It would impress anyone who cares to listen!"
 
# Method 1: regex
# Remove the special characters from the string.
no_specials_string = re.sub('[!#?,.:";]', '', data)
print(no_specials_string)
 
 
# Method 2: translate()
# Make a translator object that maps punctuation to None.
translator = str.maketrans('', '', string.punctuation)
data = data.translate(translator)
print(data)

7. Remove stop words with NLTK

from nltk.corpus import stopwords
 
 
data = ['Stuning even for the non-gamer: This sound track was beautiful!\
It paints the senery in your mind so well I would recomend\
it even to people who hate vid. game music! I have played the game Chrono \
Cross but out of all of the games I have ever played it has the best music! \
It backs away from crude keyboarding and takes a fresher step with grate\
guitars and soulful orchestras.\
It would impress anyone who cares to listen!']
 
# Remove stop words
stop_words = set(stopwords.words('english'))
 
output = []
for sentence in data:
    temp_list = []
    for word in sentence.split():
        if word.lower() not in stop_words:
            temp_list.append(word)
    output.append(' '.join(temp_list))
 
 
print(output)
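
If the stopwords corpus has never been fetched, the stopwords.words('english') call raises a LookupError; a one-time download fixes that (network access assumed):

import nltk
nltk.download('stopwords')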

8. Correct spelling with TextBlob

from textblob import TextBlob

data = "Natural language is a cantral part of our day to day life, and it's so antresting to work on any problem related to langages."

output = TextBlob(data).correct()
print(output)

9. Word tokenization with NLTK and TextBlob

import nltk
from textblob import TextBlob

data = "Natural language is a central part of our day to day life, and it's so interesting to work on any problem related to languages."

nltk_output = nltk.word_tokenize(data)
textblob_output = TextBlob(data).words

print(nltk_output)
print(textblob_output)

Output:

['Natural', 'language', 'is', 'a', 'central', 'part', 'of', 'our', 'day', 'to', 'day', 'life', ',', 'and', 'it', "'s", 'so', 'interesting', 'to', 'work', 'on', 'any', 'problem', 'related', 'to', 'languages', '.']
['Natural', 'language', 'is', 'a', 'central', 'part', 'of', 'our', 'day', 'to', 'day', 'life', 'and', 'it', "'s", 'so', 'interesting', 'to', 'work', 'on', 'any', 'problem', 'related', 'to', 'languages']
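
NLTK also splits text into sentences rather than words. A minimal sketch using sent_tokenize, assuming the punkt model has been downloaded once with nltk.download('punkt'):

import nltk

data = "Natural language is a central part of our day to day life. It's so interesting to work on language problems."
print(nltk.sent_tokenize(data))  # Prints the two sentences as a list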

10. Stem the words of a sentence or phrase with NLTK

from nltk.stem import PorterStemmer
 
st = PorterStemmer()
text = ['Where did he learn to dance like that?',
        'His eyes were dancing with humor.',
        'She shook her head and danced away',
        'Alex was an excellent dancer.']
 
output = []
for sentence in text:
    output.append(" ".join([st.stem(i) for i in sentence.split()]))
 
for item in output:
    print(item)
 
print("-" * 50)
print(st.stem('jumping'), st.stem('jumps'), st.stem('jumped'))

Output:

where did he learn to danc like that?
hi eye were danc with humor.
she shook her head and danc away
alex wa an excel dancer.
--------------------------------------------------
jump jump jump
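
PorterStemmer is the oldest of NLTK's stemmers; SnowballStemmer (sometimes called "Porter2") is usually a slightly better default and supports several languages. A minimal sketch:

from nltk.stem import SnowballStemmer

snow = SnowballStemmer('english')
print(snow.stem('dancing'), snow.stem('danced'), snow.stem('generously'))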

11. Lemmatize a sentence or phrase with NLTK

from nltk.stem import WordNetLemmatizer

wnl = WordNetLemmatizer()
text = ['She gripped the armrest as he passed two cars at a time.',
        'Her car was in full view.',
        'A number of cars carried out of state license plates.']

output = []
for sentence in text:
    output.append(" ".join([wnl.lemmatize(i) for i in sentence.split()]))

for item in output:
    print(item)

print("*" * 10)
print(wnl.lemmatize('jumps', 'n'))
print(wnl.lemmatize('jumping', 'v'))
print(wnl.lemmatize('jumped', 'v'))

print("*" * 10)
print(wnl.lemmatize('saddest', 'a'))
print(wnl.lemmatize('happiest', 'a'))
print(wnl.lemmatize('easiest', 'a'))

Output:

She gripped the armrest a he passed two car at a time.
Her car wa in full view.
A number of car carried out of state license plates.
**********
jump
jump
jump
**********
sad
happy
easy
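
The lemmatizer above treats every word as a noun by default, which is why verbs like "gripped" and "passed" come through unchanged. Feeding it a part-of-speech tag per word gives better results. A minimal sketch, assuming the punkt and averaged_perceptron_tagger resources have been downloaded once with nltk.download():

import nltk
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

def wordnet_pos(treebank_tag):
    # Map a Penn Treebank tag to the POS constant the lemmatizer expects.
    if treebank_tag.startswith('J'):
        return wordnet.ADJ
    if treebank_tag.startswith('V'):
        return wordnet.VERB
    if treebank_tag.startswith('R'):
        return wordnet.ADV
    return wordnet.NOUN

wnl = WordNetLemmatizer()
tokens = nltk.word_tokenize('She gripped the armrest as he passed two cars at a time.')
print(' '.join(wnl.lemmatize(w, wordnet_pos(t)) for w, t in nltk.pos_tag(tokens)))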

12. Find the frequency of each word in a text file with NLTK

import nltk
from nltk.corpus import webtext
 
nltk.download('webtext')
wt_words = webtext.words('testing.txt')
data_analysis = nltk.FreqDist(wt_words)
 
# Keep only the words longer than 3 characters.
filter_words = dict([(m, n) for m, n in data_analysis.items() if len(m) > 3])
 
for key in sorted(filter_words):
    print("%s: %s" % (key, filter_words[key]))
 
data_analysis = nltk.FreqDist(filter_words)
 
data_analysis.plot(25, cumulative=False)

Output:

[nltk_data] Downloading package webtext to
[nltk_data]     C:\Users\amit\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\webtext.zip.
1989: 1
Accessing: 1
Analysis: 1
Anyone: 1
Chapter: 1
Coding: 1
Data: 1
...

13. Create a word cloud from a corpus

import nltk
from nltk.corpus import webtext
from wordcloud import WordCloud
import matplotlib.pyplot as plt
 
nltk.download('webtext')
wt_words = webtext.words('testing.txt')  # Sample data
data_analysis = nltk.FreqDist(wt_words)
 
filter_words = dict([(m, n) for m, n in data_analysis.items() if len(m) > 3])
 
wcloud = WordCloud().generate_from_frequencies(filter_words)
 
# Plotting the wordcloud
plt.imshow(wcloud, interpolation="bilinear")
 
plt.axis("off")
plt.show()
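
To keep the image instead of only displaying it, WordCloud can write a PNG directly; a one-line sketch (the output filename is arbitrary):

wcloud.to_file('wordcloud.png')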

14. NLTK lexical dispersion plot

import nltk
from nltk.corpus import webtext
import matplotlib.pyplot as plt
 
words = ['data', 'science', 'dataset']
 
nltk.download('webtext')
wt_words = webtext.words('testing.txt')  # Sample data
 
points = [(x, y) for x in range(len(wt_words))
          for y in range(len(words)) if wt_words[x] == words[y]]
 
if points:
    x, y = zip(*points)
else:
    x = y = ()
 
plt.plot(x, y, "rx")  # One red x per occurrence
plt.yticks(range(len(words)), words, color="b")
plt.ylim(-1, len(words))
plt.title("Lexical Dispersion Plot")
plt.xlabel("Word Offset")
plt.show()

15. Convert text to numbers with CountVectorizer

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
 
# Sample data for analysis
data1 = "Java is a language for programming that develops a software for several platforms. A compiled code or bytecode on Java application can run on most of the operating systems including Linux, Mac operating system, and Linux. Most of the syntax of Java is derived from the C++ and C languages."
data2 = "Python supports multiple programming paradigms and comes up with a large standard library, paradigms included are object-oriented, imperative, functional and procedural."
data3 = "Go is typed statically compiled language. It was created by Robert Griesemer, Ken Thompson, and Rob Pike in 2009. This language offers garbage collection, concurrency of CSP-style, memory safety, and structural typing."
 
df1 = pd.DataFrame({'Java': [data1], 'Python': [data2], 'Go': [data3]})
 
# Initialize
vectorizer = CountVectorizer()
doc_vec = vectorizer.fit_transform(df1.iloc[0])
 
# Create dataFrame
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names())
 
# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Go  Java  Python
and           2     2       2
application   0     1       0
are           1     0       1
bytecode      0     1       0
can           0     1       0
code          0     1       0
comes         1     0       1
compiled      0     1       0
derived       0     1       0
develops      0     1       0
for           0     2       0
from          0     1       0
functional    1     0       1
imperative    1     0       1
...

16. Create a document-term matrix with TF-IDF

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Sample data for analysis
data1 = "Java is a language for programming that develops a software for several platforms. A compiled code or bytecode on Java application can run on most of the operating systems including Linux, Mac operating system, and Linux. Most of the syntax of Java is derived from the C++ and C languages."
data2 = "Python supports multiple programming paradigms and comes up with a large standard library, paradigms included are object-oriented, imperative, functional and procedural."
data3 = "Go is typed statically compiled language. It was created by Robert Griesemer, Ken Thompson, and Rob Pike in 2009. This language offers garbage collection, concurrency of CSP-style, memory safety, and structural typing."

df1 = pd.DataFrame({'Java': [data1], 'Python': [data2], 'Go': [data3]})

# Initialize
vectorizer = TfidfVectorizer()
doc_vec = vectorizer.fit_transform(df1.iloc[0])

# Create dataFrame
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names())

# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Go      Java    Python
and          0.323751  0.137553  0.323751
application  0.000000  0.116449  0.000000
are          0.208444  0.000000  0.208444
bytecode     0.000000  0.116449  0.000000
can          0.000000  0.116449  0.000000
code         0.000000  0.116449  0.000000
comes        0.208444  0.000000  0.208444
compiled     0.000000  0.116449  0.000000
derived      0.000000  0.116449  0.000000
develops     0.000000  0.116449  0.000000
for          0.000000  0.232898  0.000000
...
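
One caveat for both this and the previous example: get_feature_names() was deprecated in scikit-learn 1.0 and removed in later releases. On newer versions the index line becomes the following (a sketch of the one changed line):

# On recent scikit-learn, use get_feature_names_out() instead:
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names_out())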

17. Generate N-grams for a given sentence

Natural language toolkit: NLTK

import nltk
from nltk.util import ngrams

# Function to generate n-grams from sentences.
def extract_ngrams(data, num):
    n_grams = ngrams(nltk.word_tokenize(data), num)
    return [ ' '.join(grams) for grams in n_grams]

data = 'A class is a blueprint for the object.'

print("1-gram: ", extract_ngrams(data, 1))
print("2-gram: ", extract_ngrams(data, 2))
print("3-gram: ", extract_ngrams(data, 3))
print("4-gram: ", extract_ngrams(data, 4))

Text processing tool: TextBlob

from textblob import TextBlob
 
# Function to generate n-grams from sentences.
def extract_ngrams(data, num):
    n_grams = TextBlob(data).ngrams(num)
    return [ ' '.join(grams) for grams in n_grams]
 
data = 'A class is a blueprint for the object.'
 
print("1-gram: ", extract_ngrams(data, 1))
print("2-gram: ", extract_ngrams(data, 2))
print("3-gram: ", extract_ngrams(data, 3))
print("4-gram: ", extract_ngrams(data, 4))

Output:

1-gram:  ['A', 'class', 'is', 'a', 'blueprint', 'for', 'the', 'object']
2-gram:  ['A class', 'class is', 'is a', 'a blueprint', 'blueprint for', 'for the', 'the object']
3-gram:  ['A class is', 'class is a', 'is a blueprint', 'a blueprint for', 'blueprint for the', 'for the object']
4-gram:  ['A class is a', 'class is a blueprint', 'is a blueprint for', 'a blueprint for the', 'blueprint for the object']
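
The same n-grams can be produced without any NLP library, since an n-gram is just every window of n consecutive tokens. A minimal pure-Python sketch (whitespace tokenization, so the final period stays attached to "object."):

def extract_ngrams(data, num):
    tokens = data.split()
    return [' '.join(tokens[i:i + num]) for i in range(len(tokens) - num + 1)]

print("2-gram: ", extract_ngrams('A class is a blueprint for the object.', 2))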

18. Vocabulary with bigrams using sklearn CountVectorizer

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
 
# Sample data for analysis
data1 = "Machine language is a low-level programming language. It is easily understood by computers but difficult to read by people. This is why people use higher level programming languages. Programs written in high-level languages are also either compiled and/or interpreted into machine language so that computers can execute them."
data2 = "Assembly language is a representation of machine language. In other words, each assembly language instruction translates to a machine language instruction. Though assembly language statements are readable, the statements are still low-level. A disadvantage of assembly language is that it is not portable, because each platform comes with a particular Assembly Language"
 
df1 = pd.DataFrame({'Machine': [data1], 'Assembly': [data2]})
 
# Initialize
vectorizer = CountVectorizer(ngram_range=(2, 2))
doc_vec = vectorizer.fit_transform(df1.iloc[0])
 
# Create dataFrame
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names())
 
# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Assembly  Machine
also either                    0        1
and or                         0        1
are also                       0        1
are readable                   1        0
are still                      1        0
assembly language              5        0
because each                   1        0
but difficult                  0        1
by computers                   0        1
by people                      0        1
can execute                    0        1
...

19. Extract noun phrases with TextBlob

from textblob import TextBlob

# Extract noun phrases
blob = TextBlob("Canada is a country in the northern part of North America.")

for nouns in blob.noun_phrases:
    print(nouns)

Output:

canada
northern part
america

20. Build a word-word co-occurrence matrix

import numpy as np
import nltk
from nltk import bigrams
import itertools
import pandas as pd
 
 
def generate_co_occurrence_matrix(corpus):
    vocab = set(corpus)
    vocab = list(vocab)
    vocab_index = {word: i for i, word in enumerate(vocab)}
 
    # Create bigrams from all words in corpus
    bi_grams = list(bigrams(corpus))
 
    # Frequency distribution of bigrams ((word1, word2), num_occurrences)
    bigram_freq = nltk.FreqDist(bi_grams).most_common(len(bi_grams))
 
    # Initialise co-occurrence matrix
    # co_occurrence_matrix[current][previous]
    co_occurrence_matrix = np.zeros((len(vocab), len(vocab)))
 
    # Loop through the bigrams taking the current and previous word,
    # and the number of occurrences of the bigram.
    for bigram in bigram_freq:
        current = bigram[0][1]
        previous = bigram[0][0]
        count = bigram[1]
        pos_current = vocab_index[current]
        pos_previous = vocab_index[previous]
        co_occurrence_matrix[pos_current][pos_previous] = count
    co_occurrence_matrix = np.matrix(co_occurrence_matrix)
 
    # return the matrix and the index
    return co_occurrence_matrix, vocab_index
 
 
text_data = [['Where', 'Python', 'is', 'used'],
             ['What', 'is', 'Python', 'used', 'in'],
             ['Why', 'Python', 'is', 'best'],
             ['What', 'companies', 'use', 'Python']]
 
# Create one list using many lists
data = list(itertools.chain.from_iterable(text_data))
matrix, vocab_index = generate_co_occurrence_matrix(data)
 
 
data_matrix = pd.DataFrame(matrix, index=vocab_index,
                             columns=vocab_index)
print(data_matrix)

Output:

best  use  What  Where  ...    in   is  Python  used
best         0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
use          0.0  0.0   0.0    0.0  ...   0.0  1.0     0.0   0.0
What         1.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
Where        0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
Pythonused   0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
Why          0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
companies    0.0  1.0   0.0    1.0  ...   1.0  0.0     0.0   0.0
in           0.0  0.0   0.0    0.0  ...   0.0  0.0     1.0   0.0
is           0.0  0.0   1.0    0.0  ...   0.0  0.0     0.0   0.0
Python       0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
used         0.0  0.0   1.0    0.0  ...   0.0  0.0     0.0   0.0
 
[11 rows x 11 columns]

21. Sentiment analysis with TextBlob

from textblob import TextBlob

def sentiment(polarity):
    if polarity < 0:
        print("Negative")
    elif polarity > 0:
        print("Positive")
    else:
        print("Neutral")

blob = TextBlob("The movie was excellent!")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

blob = TextBlob("The movie was not bad.")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

blob = TextBlob("The movie was ridiculous.")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

Output:

Sentiment(polarity=1.0, subjectivity=1.0)
Positive
Sentiment(polarity=0.3499999999999999, subjectivity=0.6666666666666666)
Positive
Sentiment(polarity=-0.3333333333333333, subjectivity=1.0)
Negative
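
NLTK ships an alternative rule-based analyzer, VADER, which handles negations such as "not bad" explicitly. A minimal sketch, assuming nltk.download('vader_lexicon') has been run once:

from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("The movie was not bad."))  # 'compound' summarizes overall polarity in [-1, 1]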

22. Language translation with Goslate

import goslate

text = "Comment vas-tu?"

gs = goslate.Goslate()

translatedText = gs.translate(text, 'en')
print(translatedText)

translatedText = gs.translate(text, 'zh')
print(translatedText)

translatedText = gs.translate(text, 'de')
print(translatedText)

23. Language detection and translation with TextBlob

from textblob import TextBlob
 
blob = TextBlob("Comment vas-tu?")
 
print(blob.detect_language())
 
print(blob.translate(to='es'))
print(blob.translate(to='en'))
print(blob.translate(to='zh'))

Output:

fr
¿Como estas tu?
How are you?
你好吗?

24. Get definitions and synonyms with TextBlob

from textblob import Word
 
text_word = Word('safe')
 
print(text_word.definitions)
 
synonyms = set()
for synset in text_word.synsets:
    for lemma in synset.lemmas():
        synonyms.add(lemma.name())
         
print(synonyms)

Output:

['strongbox where valuables can be safely kept', 'a ventilated or refrigerated cupboard for securing provisions from pests', 'contraceptive device consisting of a sheath of thin rubber or latex that is worn over the penis during intercourse', 'free from danger or the risk of harm', '(of an undertaking) secure from risk', 'having reached a base without being put out', 'financially sound']
{'secure', 'rubber', 'good', 'safety', 'safe', 'dependable', 'condom', 'prophylactic'}

25. Get a list of antonyms with TextBlob

from textblob import Word

text_word = Word('safe')

antonyms = set()
for synset in text_word.synsets:
    for lemma in synset.lemmas():        
        if lemma.antonyms():
            antonyms.add(lemma.antonyms()[0].name())        

print(antonyms)

Output:

{'dangerous', 'out'}

That concludes these 25 Python text processing examples worth keeping; hopefully they serve as a handy reference for your own text processing work.
