Regular expressions, string processing, web scraping, and network graphs in R
To help you get the most out of the material below, training-program members are welcome to join tomorrow's 8 pm live class "Processing Strings with R" (使用 R 语言处理字符串).
This lesson is the latest installment of the series "R 语言数据科学" (Data Science with R); the course homepage is here (click "Read the original" at the end of this post to jump to it): https://rstata.duanshu.com/#/brief/course/229b770183e44fbbb64133df929818ec
Tomorrow's class will cover the following:
String basics; regular expressions; a case study on string processing and regular expressions in web scraping; drawing network graphs with the ggraph package; a case study on matching mobile phone numbers; common regular expressions.
In earlier lessons we learned many techniques for handling data in R; today we focus specifically on working with strings.
1 Loading tidyverse
First we load the tidyverse package, which attaches a set of data-manipulation packages and reports any function-name conflicts between them:
library(tidyverse)
2 String basics
Here are two simple strings:
(string1 <- "这是一个字符串")
#> [1] "这是一个字符串"
(string2 <- '如果字符串里面包含"双引号",需要使用单引号括起来')
#> [1] "如果字符串里面包含\"双引号\",需要使用单引号括起来"
或者也可以这样:
(string3 <- "如果字符串里面包含\"双引号\",也可以在使用转义字符后用双引号括起来")
#> [1] "如果字符串里面包含\"双引号\",也可以在使用转义字符后用双引号括起来"
(string4 <- "如果字符串里面包含'单引号',需要使用双引号括起来")
#> [1] "如果字符串里面包含'单引号',需要使用双引号括起来"
(string5 <- '如果字符串里面包含\'单引号\',也可以在使用转义字符后用单引号括起来来')
#> [1] "如果字符串里面包含'单引号',也可以在使用转义字符后用单引号括起来来"
Two common escape characters: \n (newline) and \t (tab).
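A quick self-contained illustration (my own addition, not from the course material): printing a string shows the escape sequence, while writeLines() shows the rendered result:
x <- "line 1\nline 2"
x
#> [1] "line 1\nline 2"
writeLines(x)
#> line 1
#> line 2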
2.1 String length
c("a", "词语", "R 数据科学")
#> [1] "a" "词语" "R 数据科学"
str_length(c("a", "词语", "R 数据科学"))
#> [1] 1 2 6
length(c("a", "词语", "R 数据科学"))
#> [1] 3
2.2 Combining strings
str_c("x", "y", "z")
#> [1] "xyz"
str_c(c("x", "y", "z"), c("a", "b", "c"))
#> [1] "xa" "yb" "zc"
str_c("x", "y", "z", sep = ", ")
#> [1] "x, y, z"
# NA is not combined like a normal string; it propagates
x <- c("abc", NA)
str_c("|-", x, "-|")
#> [1] "|-abc-|" NA
# But you can convert NA to the text "NA" first
str_replace_na(x)
#> [1] "abc" "NA"
str_c("|-", str_replace_na(x), "-|")
#> [1] "|-abc-|" "|-NA-|"
# If two vectors have different lengths, the shorter one is recycled
# (in newer stringr versions only length-1 vectors recycle, so str_c(c("x", "y"), c("a", "b", "c")) errors)
str_c("前缀-", c("a", "b", "c"), "-后缀")
#> [1] "前缀-a-后缀" "前缀-b-后缀" "前缀-c-后缀"
# The collapse argument joins a vector into a single string
str_c(c("x", "y", "z"), collapse = ", ")
#> [1] "x, y, z"
str_c(c("x", "y", "z"), c("a", "b", "c"), collapse = ", ")
#> [1] "xa, yb, zc"
For combining strings, the base functions paste() and paste0() are also very commonly used:
paste(c("x", "y", "z"))
#> [1] "x" "y" "z"
paste(c("x", "y", "z"), collapse = ", ")
#> [1] "x, y, z"
paste("x", "y", "z")
#> [1] "x y z"
paste("x", "y", "z", sep = ", ")
#> [1] "x, y, z"
paste0("x", "y", "z")
#> [1] "xyz"
2.3 Extracting substrings
x <- c("苹果味栗子", "Banana", "Pear")
# Characters 1 to 3
str_sub(x, 1, 3)
#> [1] "苹果味" "Ban" "Pea"
# The last three characters (positions -3 to -1)
str_sub(x, -3, -1)
#> [1] "味栗子" "ana" "ear"
# Lower-case the first character
str_sub(x, 1, 1) <- str_to_lower(str_sub(x, 1, 1))
x
#> [1] "苹果味栗子" "banana" "pear"
2.4 Case conversion
str_to_lower(c("Abc", "XYZ"))
#> [1] "abc" "xyz"
str_to_upper(c("abc", "xyz"))
#> [1] "ABC" "XYZ"
str_to_sentence(c("abc is abc", "xyz is xyz"))
#> [1] "Abc is abc" "Xyz is xyz"
str_to_title(c("abc is abc", "xyz is xyz"))
#> [1] "Abc Is Abc" "Xyz Is Xyz"
# Sorting strings (sample() is random, so your draw will differ)
str_sort(sample(letters, 20))
#> [1] "a" "c" "e" "g" "h" "i" "j" "k" "l" "n" "o" "p" "q" "r" "s" "t" "u" "w" "y"
#> [20] "z"
2.5 Exercises
Use str_length() and str_sub() to write a function that extracts the middle character(s) of a string.
str_middle <- function(x) {
  l <- str_length(x)
  if (l %% 2 == 0) {
    return(str_sub(x, l / 2, l / 2 + 1))
  } else {
    return(str_sub(x, (l + 1) / 2, (l + 1) / 2))
  }
}
str_middle("abc")
#> [1] "b"
str_middle("abcd")
#> [1] "bc"
What does str_wrap() do? (It wraps text into paragraphs of a given width.)
?str_wrap()
Look at the examples:
thanks_path <- file.path(R.home("doc"), "THANKS")
thanks <- str_c(readLines(thanks_path), collapse = "\n")
# ?word()
thanks <- word(thanks, 1, 3, sep = fixed("\n\n"))
cat(str_wrap(thanks), "\n")
cat(str_wrap(thanks, width = 40), "\n")
cat(str_wrap(thanks, width = 60, indent = 2), "\n")
cat(str_wrap(thanks, width = 60, exdent = 2), "\n")
cat(str_wrap(thanks, width = 0, exdent = 2), "\n")
What does str_trim() do? (It removes whitespace from the start and end of a string, while str_squish() also collapses runs of internal whitespace.)
Look at the examples:
?str_trim()
str_trim(" String with trailing and leading white space\t")
#> [1] "String with trailing and leading white space"
str_trim("\n\nString with trailing and leading white space\n\n")
#> [1] "String with trailing and leading white space"
str_squish(" String with trailing, middle, and leading white space\t")
#> [1] "String with trailing, middle, and leading white space"
str_squish("\n\nString with excess, trailing and leading white space\n\n")
#> [1] "String with excess, trailing and leading white space"
3 Regular expressions
We have actually used regular expressions many times in earlier lessons; almost every scraping task needs one or two. Let's now study them systematically:
3.1 Basic matching
x <- c("apple", "banana", "pear")
str_view(x, "an")
#> [2] │ b<an><an>a
str_view(x, ".a.")
#> [2] │ <ban>ana
#> [3] │ p<ear>
str_view(c("abc", "a.c", "bef"), "a\\.c")
#> [2] │ <a.c>
str_view(c("abc", "a.c", "bef"), "a.c")
#> [1] │ <abc>
#> [2] │ <a.c>
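A related gotcha worth remembering (my addition): to match a literal backslash, the regular expression needs \\, which in an R string must be written with four backslashes:
x <- "a\\b"
writeLines(x)
#> a\b
str_view(x, "\\\\")
#> [1] │ a<\>b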
3.2 Anchors
x <- c("apple", "banana", "pear")
# Match an a at the start of the string
str_view(x, "^a")
#> [1] │ <a>pple
# Match an a at the end of the string
str_view(x, "a$")
#> [2] │ banan<a>
x <- c("apple pie", "apple", "apple apple")
str_view(x, "apple")
#> [1] │ <apple> pie
#> [2] │ <apple>
#> [3] │ <apple> <apple>
# Match only strings that are exactly "apple"
str_view(x, "^apple$")
#> [2] │ <apple>
3.3 Exercises
stringr::words is a character vector of common words. Write patterns that match:
words starting with y; words ending with x; words that are exactly 3 letters long (without using str_length()); words with more than 7 letters (without using str_length()).
str_view(stringr::words, "^y")
str_view(stringr::words, "x$")
str_view(stringr::words, "^[a-z]{3}$")
# Compare: {7,} means at least 7 letters; adding [a-z]* keeps it at >= 7, while [a-z]+ requires more than 7
str_view(stringr::words, "^[a-z]{7,}$")
str_view(stringr::words, "^[a-z]{7}[a-z]*$")
str_view(stringr::words, "^[a-z]{7}[a-z]+$")
3.4 Character classes
\d matches any digit; \s matches any whitespace character, such as spaces, tabs and newlines; [abc] matches a, b or c; [^abc] matches anything except a, b or c.
Note that in R the regular expression \d has to be written in the string as \\d, and \s as \\s.
str_view(c("abc", "a.c", "a*c", "a c"), "a[.]c")
#> [2] │ <a.c>
str_view(c("abc", "a.c", "a*c", "a c"), ".[*]c")
#> [3] │ <a*c>
str_view(c("abc", "a.c", "a*c", "a c"), ".[.*]c")
#> [2] │ <a.c>
#> [3] │ <a*c>
str_view(c("abc", "a.c", "a*c", "a c"), "a[ ]")
#> [4] │ <a >c
Parentheses can be used to remove ambiguity in more complex regular expressions:
str_view(c("grey", "gray"), "gre|ay")
#> [1] │ <gre>y
#> [2] │ gr<ay>
str_view(c("grey", "gray"), "gr(e|a)y")
#> [1] │ <grey>
#> [2] │ <gray>
3.5 Repetition
?: match 0 or 1 times; +: match 1 or more times; *: match 0 or more times; {n}: match exactly n times; {n,}: match at least n times; {0,m}: match at most m times; {n,m}: match between n and m times.
x <- "1888 is the longest year in Roman numerals: MDCCCLXXXVIII"
str_view(x, "CC?")
#> [1] │ 1888 is the longest year in Roman numerals: MD<CC><C>LXXXVIII
str_view(x, "CC+")
#> [1] │ 1888 is the longest year in Roman numerals: MD<CCC>LXXXVIII
str_view(x, 'C[LX]+')
#> [1] │ 1888 is the longest year in Roman numerals: MDCC<CLXXX>VIII
str_view(x, "C{2}")
#> [1] │ 1888 is the longest year in Roman numerals: MD<CC>CLXXXVIII
str_view(x, "C{2,}")
#> [1] │ 1888 is the longest year in Roman numerals: MD<CCC>LXXXVIII
str_view(x, "C{2,3}")
#> [1] │ 1888 is the longest year in Roman numerals: MD<CCC>LXXXVIII
# By default matching is greedy, i.e. it matches the longest string that satisfies the pattern; adding ? after a quantifier makes it lazy, so it matches the shortest string instead:
str_view(x, 'C{2,3}?')
#> [1] │ 1888 is the longest year in Roman numerals: MD<CC>CLXXXVIII
str_view(x, 'C[LX]+?')
#> [1] │ 1888 is the longest year in Roman numerals: MDCC<CL>XXXVIII
3.6 分组与反向引用
上面我们提到说括号可以消除复杂正则表达式的歧义,另外括号还会创建一个编号辅助字符串的匹配,例如下面的匹配中,(..)
会匹配到两个连着的字符,并且将这两个连着的字符使用 1 指代,然后我们就可以使用 \\1
来引用这两个字符了:
str_view(fruit, "(..)\\1", match = TRUE)
#> [4] │ b<anan>a
#> [20] │ <coco>nut
#> [22] │ <cucu>mber
#> [41] │ <juju>be
#> [56] │ <papa>ya
#> [73] │ s<alal> berry
str_view("ananananan", "(..)\\1\\1", match = TRUE)
#> [1] │ <ananan>anan
str_view(fruit, "(.).\\1.\\1", match = TRUE)
#> [4] │ b<anana>
#> [56] │ p<apaya>
Similarly, (.)(.) matches two consecutive characters and stores them as groups 1 and 2, which can then be referenced with \\1 and \\2; the pattern (.)(.)\\2\\1 therefore matches strings like abba:
str_view(fruit, "(.)(.)\\2\\1", match = TRUE)
#> [5] │ bell p<eppe>r
#> [17] │ chili p<eppe>r
str_view("abbaabba", "(.)(.)\\2\\1", match = TRUE)
#> [1] │ <abba><abba>
str_view("abbaabba", "(.)(.)\\2\\1$", match = TRUE)
#> [1] │ abba<abba>
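Another common use of backreferences (my own example) is finding a repeated word:
str_view("the the cat sat", "([a-z]+) \\1")
#> [1] │ <the the> cat sat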
3.7 A regular expression for Chinese characters
Since regular expressions can use Unicode code points, and the Unicode range of (common) Chinese characters is \u4e00-\u9fa5, the following pattern matches the Chinese characters in a string:
ggwordcloud::love_words -> love_words
str_view(love_words$word, "[\u4e00-\u9fa5]+")
# 龥 (yù) is the last character of that range, so [一-龥] writes the same range with literal characters
str_view(love_words$word, "[一-龥]+")
3.8 Using regular expressions inside functions
3.8.1 str_detect(): does a string contain a match?
x <- c("apple", "banana", "pear")
str_detect(x, "e")
#> [1] TRUE FALSE TRUE
Count how many words in words (a vector of common English words) start with t:
sum(str_detect(words, "^t"))
#> [1] 65
The proportion of words that end with a vowel:
mean(str_detect(words, "[aeiou]$"))
#> [1] 0.2765306
Detecting Chinese characters:
c("1千克", "1 公斤", "1 kg") -> x
str_detect(x, "[一-龥]+")
#> [1] TRUE TRUE FALSE
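As an aside (my addition): stringr uses ICU regular expressions, which also support Unicode property classes, so \\p{Han} is an alternative way to write the Chinese-character class:
str_detect(x, "\\p{Han}")
#> [1] TRUE TRUE FALSE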
The following two expressions are equivalent:
!str_detect(words, "[aeiou]")
str_detect(words, "^[^aeiou]+$")
identical(!str_detect(words, "[aeiou]"),
str_detect(words, "^[^aeiou]+$"))
How to understand str_detect(words, "^[^aeiou]+$"):
c(words[!str_detect(words, "[aeiou]")], "abc") -> y
str_view(y, "[aeiou]")
#> [7] │ <a>bc
# Match non-vowels
str_view(y, "[^aeiou]")
#> [1] │ <b><y>
#> [2] │ <d><r><y>
#> [3] │ <f><l><y>
#> [4] │ <m><r><s>
#> [5] │ <t><r><y>
#> [6] │ <w><h><y>
#> [7] │ a<b><c>
# Match runs of consecutive non-vowels
str_view(y, "[^aeiou]+")
#> [1] │ <by>
#> [2] │ <dry>
#> [3] │ <fly>
#> [4] │ <mrs>
#> [5] │ <try>
#> [6] │ <why>
#> [7] │ a<bc>
# Match words made up of non-vowels only
str_view(y, "^[^aeiou]+$")
#> [1] │ <by>
#> [2] │ <dry>
#> [3] │ <fly>
#> [4] │ <mrs>
#> [5] │ <try>
#> [6] │ <why>
You can also use str_subset() to extract the strings matching a regular expression directly:
words[str_detect(words, "x$")]
#> [1] "box" "sex" "six" "tax"
str_subset(words, "x$")
#> [1] "box" "sex" "six" "tax"
More often, though, the strings we need to work with live in a data frame:
df <- tibble(
word = words,
i = seq_along(word)
)
df %>%
dplyr::filter(str_detect(word, "x$"))
#> # A tibble: 4 × 2
#> word i
#> <chr> <int>
#> 1 box 108
#> 2 sex 747
#> 3 six 772
#> 4 tax 841
3.8.2 str_count(): counting the number of matches
x <- c("apple", "banana", "pear")
str_count(x, "a")
#> [1] 1 3 1
# On average, how many vowels does each word contain:
mean(str_count(words, "[aeiou]"))
#> [1] 1.991837
# Use together with mutate():
df %>%
mutate(
vowels = str_count(word, "[aeiou]"),
consonants = str_count(word, "[^aeiou]")
)
#> # A tibble: 980 × 4
#> word i vowels consonants
#> <chr> <int> <int> <int>
#> 1 a 1 1 0
#> 2 able 2 2 2
#> 3 about 3 3 2
#> 4 absolute 4 4 4
#> 5 accept 5 2 4
#> 6 account 6 3 4
#> 7 achieve 7 4 3
#> 8 across 8 2 4
#> 9 act 9 1 2
#> 10 active 10 3 3
#> # ℹ 970 more rows
Note that matches never overlap (str_view_all(), used below, has been superseded by str_view() in recent stringr versions):
# The result below is 2, not 3
str_count("abababa", "aba")
#> [1] 2
str_view_all("abababa", "aba")
#> [1] │ <aba>b<aba>
3.8.3 str_extract(): extracting matches
The sentences vector is a collection of example sentences:
head(sentences)
#> [1] "The birch canoe slid on the smooth planks."
#> [2] "Glue the sheet to the dark blue background."
#> [3] "It's easy to tell the depth of a well."
#> [4] "These days a chicken leg is a rare dish."
#> [5] "Rice is often served in round bowls."
#> [6] "The juice of lemons makes fine punch."
With the following approach we can find all sentences that contain a colour word:
colours <- c("red", "orange", "yellow", "green", "blue", "purple")
colour_match <- str_c(colours, collapse = "|")
colour_match
#> [1] "red|orange|yellow|green|blue|purple"
# Sentences containing a colour:
has_colour <- str_subset(sentences, colour_match)
# The colours these sentences contain:
str_extract(has_colour, colour_match)
str_extract_all(has_colour, colour_match)
str_extract_all(has_colour, colour_match, simplify = T)
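To make the difference concrete, a toy sentence of my own with two colour words shows that str_extract() keeps only the first match, while str_extract_all() keeps all of them:
both <- "The flag is red and blue."
str_extract(both, colour_match)
#> [1] "red"
str_extract_all(both, colour_match)
#> [[1]]
#> [1] "red"  "blue"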
3.8.4 Grouped matches
For example, to match the noun that follows "a" or "the" (as a rough heuristic, we simply take the next run of non-space characters), we can write:
noun <- "(a|the) ([^ ]+)"
has_noun <- sentences %>%
str_subset(noun) %>%
head(10)
has_noun %>%
str_extract(noun)
#> [1] "the smooth" "the sheet" "the depth" "a chicken" "the parked"
#> [6] "the sun" "the huge" "the ball" "the woman" "a helps"
str_match() returns a matrix, with one column for the full match and one per group:
has_noun %>%
str_match(noun)
#> [,1] [,2] [,3]
#> [1,] "the smooth" "the" "smooth"
#> [2,] "the sheet" "the" "sheet"
#> [3,] "the depth" "the" "depth"
#> [4,] "a chicken" "a" "chicken"
#> [5,] "the parked" "the" "parked"
#> [6,] "the sun" "the" "sun"
#> [7,] "the huge" "the" "huge"
#> [8,] "the ball" "the" "ball"
#> [9,] "the woman" "the" "woman"
#> [10,] "a helps" "a" "helps"
If we are working with a data frame, we can use the extract() function from the tidyr package:
tibble(sentence = sentences) %>%
tidyr::extract(
sentence, c("article", "noun"), "(a|the) ([^ ]+)",
remove = FALSE
)
#> # A tibble: 720 × 3
#> sentence article noun
#> <chr> <chr> <chr>
#> 1 The birch canoe slid on the smooth planks. the smooth
#> 2 Glue the sheet to the dark blue background. the sheet
#> 3 It's easy to tell the depth of a well. the depth
#> 4 These days a chicken leg is a rare dish. a chicken
#> 5 Rice is often served in round bowls. <NA> <NA>
#> 6 The juice of lemons makes fine punch. <NA> <NA>
#> 7 The box was thrown beside the parked truck. the parked
#> 8 The hogs were fed chopped corn and garbage. <NA> <NA>
#> 9 Four hours of steady work faced us. <NA> <NA>
#> 10 A large size in stockings is hard to sell. <NA> <NA>
#> # ℹ 710 more rows
For example, we previously scraped a data set like this one:
library(V8)
library(tidyverse)
library(rvest)
# url <- "http://stockdata.stock.hexun.com/gszl/data/jsondata/jbgk.ashx?count=5000&titType=null&page=1&callback=hxbase_json15"
# download.file(url, 'temp.json')
# cat("\n", file = 'temp.json', append = T)
readLines('temp.json', encoding = "GB2312") %>%
iconv("GBK", "UTF-8") %>%
stringr::str_replace("hxbase_json15\\(", "var data=") %>%
stringr::str_replace("\\}\\]\\}\\)", "}]};") -> text
ct <- v8()
ct$eval(text)
ct$get("data") -> js
js$list %>%
as_tibble() %>%
type_convert() -> stkdf
stkdf
#> # A tibble: 4,716 × 18
#> Number StockNameLink Stockname Pricelimit lootchips shareholders
#> <dbl> <chr> <chr> <chr> <chr> <chr>
#> 1 1 s600519.shtml 贵州茅台(600519) 12.56 12.56 18588.59
#> 2 2 s601398.shtml 工商银行(601398) 3564.06 2696.12 11701.17
#> 3 3 s601288.shtml 农业银行(601288) 3499.83 2992.85 8529.61
#> 4 4 s601857.shtml 中国石油(601857) 1830.21 1619.22 8290.41
#> 5 5 s601988.shtml 中国银行(601988) 2943.88 2107.66 6428.35
#> 6 6 s601628.shtml 中国人寿(601628) 282.65 208.24 5943.04
#> 7 7 s600036.shtml 招商银行(600036) 252.20 206.29 5815.30
#> 8 8 s000858.shtml 五粮液(000858) 38.82 38.81 5516.31
#> 9 9 s600900.shtml 长江电力(600900) 227.42 227.42 5005.48
#> 10 10 s601088.shtml 中国神华(601088) 198.69 164.91 4919.28
#> # ℹ 4,706 more rows
#> # ℹ 12 more variables: Institutional <chr>, Iratio <chr>, deviation <chr>,
#> # maincost <chr>, district <chr>, Cprice <chr>, Stockoverview <chr>,
#> # Addoptional <chr>, hyLink <chr>, gnLink <chr>, dyLink <chr>,
#> # StockLink <chr>
The Stockname column of stkdf consists of the company's short name plus its stock code, so we can use extract() to split it:
stkdf %>%
select(Stockname) %>%
tidyr::extract(
Stockname, c("name", "code"), "([\u4e00-\u9fa5]+)\\((\\d{6})\\)",
remove = FALSE
)
#> # A tibble: 4,716 × 3
#> Stockname name code
#> <chr> <chr> <chr>
#> 1 贵州茅台(600519) 贵州茅台 600519
#> 2 工商银行(601398) 工商银行 601398
#> 3 农业银行(601288) 农业银行 601288
#> 4 中国石油(601857) 中国石油 601857
#> 5 中国银行(601988) 中国银行 601988
#> 6 中国人寿(601628) 中国人寿 601628
#> 7 招商银行(600036) 招商银行 600036
#> 8 五粮液(000858) 五粮液 000858
#> 9 长江电力(600900) 长江电力 600900
#> 10 中国神华(601088) 中国神华 601088
#> # ℹ 4,706 more rows
3.8.5 Replacing matches
We already used str_replace() in the code above.
For example, replace the vowels in each word with -:
x <- c("apple", "pear", "banana")
str_replace(x, "[aeiou]", "-")
#> [1] "-pple" "p-ar" "b-nana"
str_replace_all(x, "[aeiou]", "-")
#> [1] "-ppl-" "p--r" "b-n-n-"
You can also do one-to-one replacements with a named vector:
x <- c("1 house", "2 cars", "3 people")
str_replace_all(x, c("1" = "one", "2" = "two", "3" = "three"))
#> [1] "one house" "two cars" "three people"
The replacement argument of str_replace() can also contain backreferences; for example, to reverse the order of the first three words:
head(sentences)
#> [1] "The birch canoe slid on the smooth planks."
#> [2] "Glue the sheet to the dark blue background."
#> [3] "It's easy to tell the depth of a well."
#> [4] "These days a chicken leg is a rare dish."
#> [5] "Rice is often served in round bowls."
#> [6] "The juice of lemons makes fine punch."
head(sentences) %>%
str_replace("([^ ]+) ([^ ]+) ([^ ]+)", "\\3 \\2 \\1")
#> [1] "canoe birch The slid on the smooth planks."
#> [2] "sheet the Glue to the dark blue background."
#> [3] "to easy It's tell the depth of a well."
#> [4] "a days These chicken leg is a rare dish."
#> [5] "often is Rice served in round bowls."
#> [6] "of juice The lemons makes fine punch."
I once ran into the following problem: I needed to export a chart built with highcharter as a JS script, but the output of export_hc() was not quite what I needed (I wanted something like this: https://www.highcharts.com.cn/demo/highcharts/line-basic), so I wrote the code below:
library(highcharter)
library(tidyverse)
library(jsonlite)
highcharts_demo() -> hc
hc$x$hc_opts %>%
toJSON(pretty = T, auto_unbox = T) %>%
str_replace_all('"(\\w+)":', "\\1:") %>%
paste0("Highcharts.chart('container', ", ., ");") %>%
writeLines("temp.js")
3.8.6 str_split(): splitting strings
For example, splitting sentences into words:
sentences %>%
head(5) %>%
str_split(" ")
#> [[1]]
#> [1] "The" "birch" "canoe" "slid" "on" "the" "smooth"
#> [8] "planks."
#>
#> [[2]]
#> [1] "Glue" "the" "sheet" "to" "the"
#> [6] "dark" "blue" "background."
#>
#> [[3]]
#> [1] "It's" "easy" "to" "tell" "the" "depth" "of" "a" "well."
#>
#> [[4]]
#> [1] "These" "days" "a" "chicken" "leg" "is" "a"
#> [8] "rare" "dish."
#>
#> [[5]]
#> [1] "Rice" "is" "often" "served" "in" "round" "bowls."
sentences %>%
head(5) %>%
str_split(" ", simplify = T)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] "The" "birch" "canoe" "slid" "on" "the" "smooth" "planks."
#> [2,] "Glue" "the" "sheet" "to" "the" "dark" "blue" "background."
#> [3,] "It's" "easy" "to" "tell" "the" "depth" "of" "a"
#> [4,] "These" "days" "a" "chicken" "leg" "is" "a" "rare"
#> [5,] "Rice" "is" "often" "served" "in" "round" "bowls." ""
#> [,9]
#> [1,] ""
#> [2,] ""
#> [3,] "well."
#> [4,] "dish."
#> [5,] ""
# The n argument limits the number of pieces
sentences %>%
head(5) %>%
str_split(" ", simplify = T, n = 2)
#> [,1] [,2]
#> [1,] "The" "birch canoe slid on the smooth planks."
#> [2,] "Glue" "the sheet to the dark blue background."
#> [3,] "It's" "easy to tell the depth of a well."
#> [4,] "These" "days a chicken leg is a rare dish."
#> [5,] "Rice" "is often served in round bowls."
You can also use the boundary() function to control how strings are split:
# Split by character
sentences %>%
head(5) %>%
str_split(boundary("character"), simplify = T)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]
#> [1,] "T" "h" "e" " " "b" "i" "r" "c" "h" " " "c" "a" "n" "o"
#> [2,] "G" "l" "u" "e" " " "t" "h" "e" " " "s" "h" "e" "e" "t"
#> [3,] "I" "t" "'" "s" " " "e" "a" "s" "y" " " "t" "o" " " "t"
#> [4,] "T" "h" "e" "s" "e" " " "d" "a" "y" "s" " " "a" " " "c"
#> [5,] "R" "i" "c" "e" " " "i" "s" " " "o" "f" "t" "e" "n" " "
#> [,15] [,16] [,17] [,18] [,19] [,20] [,21] [,22] [,23] [,24] [,25] [,26]
#> [1,] "e" " " "s" "l" "i" "d" " " "o" "n" " " "t" "h"
#> [2,] " " "t" "o" " " "t" "h" "e" " " "d" "a" "r" "k"
#> [3,] "e" "l" "l" " " "t" "h" "e" " " "d" "e" "p" "t"
#> [4,] "h" "i" "c" "k" "e" "n" " " "l" "e" "g" " " "i"
#> [5,] "s" "e" "r" "v" "e" "d" " " "i" "n" " " "r" "o"
#> [,27] [,28] [,29] [,30] [,31] [,32] [,33] [,34] [,35] [,36] [,37] [,38]
#> [1,] "e" " " "s" "m" "o" "o" "t" "h" " " "p" "l" "a"
#> [2,] " " "b" "l" "u" "e" " " "b" "a" "c" "k" "g" "r"
#> [3,] "h" " " "o" "f" " " "a" " " "w" "e" "l" "l" "."
#> [4,] "s" " " "a" " " "r" "a" "r" "e" " " "d" "i" "s"
#> [5,] "u" "n" "d" " " "b" "o" "w" "l" "s" "." "" ""
#> [,39] [,40] [,41] [,42] [,43]
#> [1,] "n" "k" "s" "." ""
#> [2,] "o" "u" "n" "d" "."
#> [3,] "" "" "" "" ""
#> [4,] "h" "." "" "" ""
#> [5,] "" "" "" "" ""
# Split by word
sentences %>%
head(5) %>%
str_split(boundary("word"), simplify = T)
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] "The" "birch" "canoe" "slid" "on" "the" "smooth" "planks"
#> [2,] "Glue" "the" "sheet" "to" "the" "dark" "blue" "background"
#> [3,] "It's" "easy" "to" "tell" "the" "depth" "of" "a"
#> [4,] "These" "days" "a" "chicken" "leg" "is" "a" "rare"
#> [5,] "Rice" "is" "often" "served" "in" "round" "bowls" ""
#> [,9]
#> [1,] ""
#> [2,] ""
#> [3,] "well"
#> [4,] "dish"
#> [5,] ""
# Split by sentence
sentences %>%
head(5) %>%
str_c(collapse = " ") %>%
str_split(boundary("sentence")) %>%
`[[`(1)
#> [1] "The birch canoe slid on the smooth planks. "
#> [2] "Glue the sheet to the dark blue background. "
#> [3] "It's easy to tell the depth of a well. "
#> [4] "These days a chicken leg is a rare dish. "
#> [5] "Rice is often served in round bowls."
# Split at possible line breaks
sentences %>%
head(5) %>%
str_c(collapse = " ") %>%
str_split(boundary("line_break"))
#> [[1]]
#> [1] "The " "birch " "canoe " "slid " "on "
#> [6] "the " "smooth " "planks. " "Glue " "the "
#> [11] "sheet " "to " "the " "dark " "blue "
#> [16] "background. " "It's " "easy " "to " "tell "
#> [21] "the " "depth " "of " "a " "well. "
#> [26] "These " "days " "a " "chicken " "leg "
#> [31] "is " "a " "rare " "dish. " "Rice "
#> [36] "is " "often " "served " "in " "round "
#> [41] "bowls."
3.8.7 str_locate(): locating matches
str_locate() and str_locate_all() return the positions of successful matches:
str_view(head(fruit, 5), "a")
#> [1] │ <a>pple
#> [2] │ <a>pricot
#> [3] │ <a>voc<a>do
#> [4] │ b<a>n<a>n<a>
str_locate(head(fruit, 5), "a")
#> start end
#> [1,] 1 1
#> [2,] 1 1
#> [3,] 1 1
#> [4,] 2 2
#> [5,] NA NA
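str_locate() pairs naturally with str_sub(): the start and end columns can be fed back in to pull out the matched text (my addition):
loc <- str_locate(head(fruit, 5), "a")
str_sub(head(fruit, 5), loc[, "start"], loc[, "end"])
#> [1] "a" "a" "a" "a" NA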
3.9 Other types of matching
Under the hood, pattern strings are wrapped in regex():
str_view(head(fruit, 5), "nana")
#> [4] │ ba<nana>
# The call above is actually shorthand for the code below
str_view(head(fruit, 5), regex("nana"))
#> [4] │ ba<nana>
Using regex() explicitly enables more kinds of matching, for example ignoring case:
bananas <- c("banana", "Banana", "BANANA")
str_view(bananas, "banana")
#> [1] │ <banana>
str_view(bananas, regex("banana", ignore_case = TRUE))
#> [1] │ <banana>
#> [2] │ <Banana>
#> [3] │ <BANANA>
Another example is multiline matching, where ^ and $ match the start and end of each line:
x <- "Line 1\nLine 2\nLine 3"
str_extract_all(x, "^Line")
#> [[1]]
#> [1] "Line"
str_extract_all(x, regex("^Line", multiline = TRUE))
#> [[1]]
#> [1] "Line" "Line" "Line"
When we only need to match a fixed string, using fixed() is more efficient:
microbenchmark::microbenchmark(
fixed = str_detect(sentences, fixed("the")),
regex = str_detect(sentences, "the"),
times = 20
)
#> Unit: microseconds
#> expr min lq mean median uq max neval
#> fixed 74.781 80.516 94.40215 82.4820 87.7710 289.194 20
#> regex 237.182 241.234 255.43170 244.5445 256.3005 381.417 20
3.10 Other uses of regular expressions
apropos() searches all objects available from the global environment:
apropos("replace")
#> [1] "%+replace%" "replace" "replace_na" "setReplaceMethod"
#> [5] "str_replace" "str_replace_all" "str_replace_na" "theme_replace"
dir() lists the files in a directory whose names match a pattern:
head(dir(pattern = "\\.Rmd$"))
For file handling, the fs package is recommended:
fs::dir_ls(regexp = "[.]Rmd$")
For example, the attachments contain three files, mpg1.csv, mpg2.csv and mpg3.csv, which can be combined like this:
library(fs)
dir_ls(regexp = "mpg\\d.csv") %>%
lapply(read_csv) %>%
bind_rows()
#> # A tibble: 234 × 11
#> manufacturer model displ year cyl trans drv cty hwy fl class
#> <chr> <chr> <dbl> <dbl> <dbl> <chr> <chr> <dbl> <dbl> <chr> <chr>
#> 1 audi a4 1.8 1999 4 auto… f 18 29 p comp…
#> 2 audi a4 1.8 1999 4 manu… f 21 29 p comp…
#> 3 audi a4 2 2008 4 manu… f 20 31 p comp…
#> 4 audi a4 2 2008 4 auto… f 21 30 p comp…
#> 5 audi a4 2.8 1999 6 auto… f 16 26 p comp…
#> 6 audi a4 2.8 1999 6 manu… f 18 26 p comp…
#> 7 audi a4 3.1 2008 6 auto… f 18 27 p comp…
#> 8 audi a4 quattro 1.8 1999 4 manu… 4 18 26 p comp…
#> 9 audi a4 quattro 1.8 1999 4 auto… 4 16 25 p comp…
#> 10 audi a4 quattro 2 2008 4 manu… 4 20 28 p comp…
#> # ℹ 224 more rows
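As an alternative to the lapply() + bind_rows() pattern, readr 2.0+ can read a whole vector of paths in one call (the id column name below is my own choice):
dir_ls(regexp = "mpg\\d\\.csv") %>%
  read_csv(id = "source_file")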
4 Case study: string processing and regular expressions in web scraping
This case study comes from "Holy ifelse() statements Batman!" (https://austinwehrwein.com/data-visualization/plotting-batman-villains-ggraph/). The author's code is quite old, so I rewrote the scraping part; the original data cleaning also contains a small mistake, which is why my final plot differs from the author's.
First we need to scrape, from "A Visual Guide to All 37 Villains in the Batman TV Series" (http://mentalfloss.com/article/60213/visual-guide-all-37-villains-batman-tv-series), the season and episode numbers in which each villain appears. To make the case easy to reproduce, I have saved the page as the file visual-guide-all-37-villains-batman-tv-series.html.
library(tidyverse)
library(rvest)
# To make this easier to re-run, I downloaded the page in advance:
# download.file('http://mentalfloss.com/article/60213/visual-guide-all-37-villains-batman-tv-series',
# 'visual-guide-all-37-villains-batman-tv-series.html')
# The villains' names
read_html('visual-guide-all-37-villains-batman-tv-series.html') %>%
html_nodes(css = "#article-1 > div > div.article-body-section.top-leaderboard-limit > div.article-body-content-container > div.article-body-content > div.article-body > h4") %>%
html_text() %>%
as_tibble() -> namedf
read_html('visual-guide-all-37-villains-batman-tv-series.html') %>%
html_nodes(css = "strong i") %>%
html_text() %>%
as_tibble() -> seasondf
bind_cols(namedf, seasondf) %>%
set_names(c("name", "v2")) %>%
mutate(v2 = str_remove_all(v2, "EPISODES "),
v2 = str_remove_all(v2, "EPISODE ")) %>%
separate_rows("v2", sep = "SEASON ") %>%
dplyr::filter(v2 != "") %>%
separate(v2, into = c("season", "episode"),
sep = " \\(") %>%
mutate(episode = str_remove_all(episode, ","),
episode = str_remove_all(episode, "\\)")) %>%
tidytext::unnest_tokens(episode, episode,
token = stringr::str_split,
pattern = " ") %>%
dplyr::filter(episode != "") %>%
mutate(name = str_remove_all(name, "\\d+\\. ")) %>%
type_convert() %>%
mutate(name = str_replace_all(name, " \\(", "\n\\(")) %>%
mutate(to = str_c(season, episode)) %>%
rename(from = name) %>%
select(-episode) %>%
select(to, from, season) -> df
df
#> # A tibble: 141 × 3
#> to from season
#> <chr> <chr> <dbl>
#> 1 11 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 2 12 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 3 111 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 4 112 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 5 123 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 6 124 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 7 131 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 8 132 "THE RIDDLER\n(FRANK GORSHIN)" 1
#> 9 32 "THE RIDDLER\n(FRANK GORSHIN)" 3
#> 10 245 "THE RIDDLER\n(JOHN ASTIN)" 2
#> # ℹ 131 more rows
Now we can draw a network graph showing the episodes in which each villain appears (cnfont below should be a Chinese font family you have registered, for example via showtext):
library(ggraph)
library(igraph)
graph <- graph_from_data_frame(as.data.frame(df))
V(graph)$degree <- degree(graph)
n.names <- unique(df$to)
# Fruchterman-Reingold layout
ggraph(graph, layout = 'fr') +
geom_edge_link(aes(
colour = factor(season)
)) +
geom_node_point(aes(
size = ifelse(V(graph)$name %in% n.names, 1, degree)),
colour = ifelse(V(graph)$name %in% n.names, '#363636', '#ffffff'),
show.legend = F) +
geom_node_text(aes(
label = name),
color = ifelse(V(graph)$name %in% n.names, 'grey', 'white'),
size = ifelse(V(graph)$name %in% n.names, 1.75, 2.5),
repel = T,
check_overlap = T) +
scale_edge_color_brewer('Season',
palette = 'Dark2') +
theme_graph(background = 'grey20',
text_colour = 'white',
base_family = cnfont,
base_size = 10,
subtitle_size = 10,
title_size = 22) +
theme(legend.position = 'bottom') +
labs(
title = '蝙蝠侠中的反派',
subtitle = '节点表示蝙蝠侠电视剧的1——3季中的37个反派,尾端的数字表示出现的季和集数。',
caption = '数据来源: A Visual Guide to All 37 Villains in the Batman TV Series | Mental Floss\n<http://mentalfloss.com/article/60213/visual-guide-all-37-villains-batman-tv-series>')
And that produces this rather striking network graph.
5 Case study: matching mobile phone numbers
A GitHub project, VincentSit/ChinaMobilePhoneNumberRegex, collects regular expressions for matching mainland-China mobile phone numbers.
For example, to pick out which of the numbers below could be phone numbers:
c("123", "31231", "18202892320", "12202892320", "17061331428") %>%
str_view_all("^(?:\\+?86)?1(?:3\\d{3}|5[^4\\D]\\d{2}|8\\d{3}|7(?:[0-35-9]\\d{2}|4(?:0\\d|1[0-2]|9\\d))|9[0-35-9]\\d{2}|6[2567]\\d{2}|4(?:(?:10|4[01])\\d{3}|[68]\\d{4}|[579]\\d{2}))\\d{6}$")
6 Common regular expressions
The common-regex project collects all kinds of frequently used regular expressions; take a look if you are interested.
Finally, here is a one-page cheatsheet summarising common regular expressions:
knitr::include_url("RegExCheatsheet.pdf", height = "600px")
For more, join tomorrow's 8 pm live class!
Live-stream information
To help you get the most out of the material above, training-program members are welcome to join tomorrow's 8 pm live class "Processing Strings with R" (使用 R 语言处理字符串).
Venue: Tencent Meeting (requires signing up for the RStata training program). Handouts: also require signing up for the RStata training program; for details see "一起来学习 R 语言和 Stata 啦!" (Come learn R and Stata with us!). You can ask questions at any time while studying.
For more information about RStata membership, add the WeChat account r_stata:
Attachment download (click "Read the original" at the end of this post to jump there): https://rstata.duanshu.com/#/brief/course/57571646805e4786b622867cfd5ad1d9