
Deep Learning - NLP: Transfer Learning (reusing models that have already been trained) [GLUE datasets; pretrained models (BERT, GPT, Transformer-XL, XLNet, T5); fine-tuning; fine-tuning scripts]

I. Overview of transfer learning

Pretrained model

Fine-tuning

Fine-tuning script

Two ways of transferring

II. Standard datasets in NLP

The GLUE benchmark (General Language Understanding Evaluation) was jointly released by New York University, the University of Washington and others. It covers a range of NLP task types and, as of January 2020, contained 11 sub-task datasets. It has become a standard yardstick for measuring progress in NLP research.

GLUE contains the following datasets: CoLA, SST-2, MRPC, STS-B, QQP, MNLI, SNLI, QNLI, RTE, WNLI, and a diagnostic set.
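As an aside (not part of the official script below), individual GLUE tasks can nowadays also be pulled directly with the Hugging Face datasets library; a minimal sketch, assuming the datasets package is installed:

# Minimal sketch: load one GLUE task with the `datasets` library
# instead of downloading the TSV files by hand.
from datasets import load_dataset

cola = load_dataset("glue", "cola")  # DatasetDict with train/validation/test splits
print(cola["train"][0])              # a dict with 'sentence', 'label' and 'idx' fields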

1. How to download the GLUE benchmark

GLUE download script (officially provided):

""" Script for downloading all GLUE data.
Example usage:python download_glue_data.py --data_dir data --tasks all
Note: for legal reasons, we are unable to host MRPC.
You can either use the version hosted by the SentEval team, which is already tokenized,
or you can download the original data from:
https://download.microsoft.com/download/D/4/6/D46FF87A-F6B9-4252-AA8B-3604ED519838/MSRParaphraseCorpus.msi  # noqa
and extract the data from it manually.
For Windows users, you can run the .msi file. For Mac and Linux users, consider an external library
such as 'cabextract' (see below for an example). You should then rename and place specific files in
a folder (see below for an example).
mkdir MRPC
cabextract MSRParaphraseCorpus.msi -d MRPC
cat MRPC/_2DEC3DBE877E4DB192D17C0256E90F1D | tr -d $'\r' > MRPC/msr_paraphrase_train.txt
cat MRPC/_D7B391F9EAFF4B1B8BCE8F21B20B1B61 | tr -d $'\r' > MRPC/msr_paraphrase_test.txt
rm MRPC/_*
rm MSRParaphraseCorpus.msi
"""import os
import sys
import shutil
import argparse
import tempfile
import urllib.request
import zipfileTASKS = ["CoLA", "SST", "MRPC", "QQP", "STS", "MNLI", "SNLI", "QNLI", "RTE", "WNLI", "diagnostic"]
TASK2PATH = {"CoLA": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FCoLA.zip?alt=media&token=46d5e637-3411-4188-bc44-5809b5bfb5f4",  # noqa"SST": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8",  # noqa"MRPC": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc",  # noqa"QQP": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQQP-clean.zip?alt=media&token=11a647cb-ecd3-49c9-9d31-79f8ca8fe277",  # noqa"STS": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSTS-B.zip?alt=media&token=bddb94a7-8706-4e0d-a694-1109e12273b5",  # noqa"MNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FMNLI.zip?alt=media&token=50329ea1-e339-40e2-809c-10c40afff3ce",  # noqa"SNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSNLI.zip?alt=media&token=4afcfbb2-ff0c-4b2d-a09a-dbf07926f4df",  # noqa"QNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FQNLIv2.zip?alt=media&token=6fdcf570-0fc5-4631-8456-9505272d1601",  # noqa"RTE": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb",  # noqa"WNLI": "https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FWNLI.zip?alt=media&token=068ad0a0-ded7-4bd7-99a5-5e00222e0faf",  # noqa"diagnostic": ["https://storage.googleapis.com/mtl-sentence-representations.appspot.com/tsvsWithoutLabels%2FAX.tsv?GoogleAccessId=firebase-adminsdk-0khhl@mtl-sentence-representations.iam.gserviceaccount.com&Expires=2498860800&Signature=DuQ2CSPt2Yfre0C%2BiISrVYrIFaZH1Lc7hBVZDD4ZyR7fZYOMNOUGpi8QxBmTNOrNPjR3z1cggo7WXFfrgECP6FBJSsURv8Ybrue8Ypt%2FTPxbuJ0Xc2FhDi%2BarnecCBFO77RSbfuz%2Bs95hRrYhTnByqu3U%2FYZPaj3tZt5QdfpH2IUROY8LiBXoXS46LE%2FgOQc%2FKN%2BA9SoscRDYsnxHfG0IjXGwHN%2Bf88q6hOmAxeNPx6moDulUF6XMUAaXCSFU%2BnRO2RDL9CapWxj%2BDl7syNyHhB7987hZ80B%2FwFkQ3MEs8auvt5XW1%2Bd4aCU7ytgM69r8JDCwibfhZxpaa4gd50QXQ%3D%3D",  # noqa"https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1",],
}MRPC_TRAIN = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt"
MRPC_TEST = "https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt"def download_and_extract(task, data_dir):print("Downloading and extracting %s..." % task)data_file = "%s.zip" % taskurllib.request.urlretrieve(TASK2PATH[task], data_file)with zipfile.ZipFile(data_file) as zip_ref:zip_ref.extractall(data_dir)os.remove(data_file)print("\tCompleted!")def format_mrpc(data_dir, path_to_data):print("Processing MRPC...")mrpc_dir = os.path.join(data_dir, "MRPC")if not os.path.isdir(mrpc_dir):os.mkdir(mrpc_dir)if path_to_data:mrpc_train_file = os.path.join(path_to_data, "msr_paraphrase_train.txt")mrpc_test_file = os.path.join(path_to_data, "msr_paraphrase_test.txt")else:print("Local MRPC data not specified, downloading data from %s" % MRPC_TRAIN)mrpc_train_file = os.path.join(mrpc_dir, "msr_paraphrase_train.txt")mrpc_test_file = os.path.join(mrpc_dir, "msr_paraphrase_test.txt")urllib.request.urlretrieve(MRPC_TRAIN, mrpc_train_file)urllib.request.urlretrieve(MRPC_TEST, mrpc_test_file)assert os.path.isfile(mrpc_train_file), "Train data not found at %s" % mrpc_train_fileassert os.path.isfile(mrpc_test_file), "Test data not found at %s" % mrpc_test_fileurllib.request.urlretrieve(TASK2PATH["MRPC"], os.path.join(mrpc_dir, "dev_ids.tsv"))dev_ids = []with open(os.path.join(mrpc_dir, "dev_ids.tsv"), encoding="utf8") as ids_fh:for row in ids_fh:dev_ids.append(row.strip().split("\t"))with open(mrpc_train_file, encoding="utf8") as data_fh, open(os.path.join(mrpc_dir, "train.tsv"), "w", encoding="utf8") as train_fh, open(os.path.join(mrpc_dir, "dev.tsv"), "w", encoding="utf8") as dev_fh:header = data_fh.readline()train_fh.write(header)dev_fh.write(header)for row in data_fh:label, id1, id2, s1, s2 = row.strip().split("\t")if [id1, id2] in dev_ids:dev_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))else:train_fh.write("%s\t%s\t%s\t%s\t%s\n" % (label, id1, id2, s1, s2))with open(mrpc_test_file, encoding="utf8") as data_fh, open(os.path.join(mrpc_dir, "test.tsv"), "w", encoding="utf8") as test_fh:header = data_fh.readline()test_fh.write("index\t#1 ID\t#2 ID\t#1 String\t#2 String\n")for idx, row in enumerate(data_fh):label, id1, id2, s1, s2 = row.strip().split("\t")test_fh.write("%d\t%s\t%s\t%s\t%s\n" % (idx, id1, id2, s1, s2))print("\tCompleted!")def download_diagnostic(data_dir):print("Downloading and extracting diagnostic data...")if not os.path.isdir(os.path.join(data_dir, "MNLI")):os.mkdir(os.path.join(data_dir, "MNLI"))data_file = os.path.join(data_dir, "MNLI", "diagnostic.tsv")urllib.request.urlretrieve(TASK2PATH["diagnostic"][0], data_file)data_file = os.path.join(data_dir, "MNLI", "diagnostic-full.tsv")urllib.request.urlretrieve(TASK2PATH["diagnostic"][1], data_file)print("\tCompleted!")returndef get_tasks(task_names):task_names = task_names.split(",")if "all" in task_names:tasks = TASKSelse:tasks = []for task_name in task_names:assert task_name in TASKS, "Task %s not found!" 
% task_nametasks.append(task_name)if "MNLI" in tasks and "diagnostic" not in tasks:tasks.append("diagnostic")return tasksdef main(arguments):parser = argparse.ArgumentParser()parser.add_argument("--data_dir", help="directory to save data to", type=str, default="glue_data")parser.add_argument("--tasks",help="tasks to download data for as a comma separated string",type=str,default="all",)parser.add_argument("--path_to_mrpc",help="path to directory containing extracted MRPC data, msr_paraphrase_train.txt and ""msr_paraphrase_text.txt",type=str,default="",)args = parser.parse_args(arguments)if not os.path.isdir(args.data_dir):os.mkdir(args.data_dir)tasks = get_tasks(args.tasks)for task in tasks:if task == "MRPC":format_mrpc(args.data_dir, args.path_to_mrpc)elif task == "diagnostic":download_diagnostic(args.data_dir)else:download_and_extract(task, args.data_dir)if __name__ == "__main__":sys.exit(main(sys.argv[1:]))
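To download everything, run the script as shown in its docstring; to fetch only a subset of tasks, pass a comma-separated list (a hypothetical invocation, with task names matching the TASKS list above):

python download_glue_data.py --data_dir glue_data --tasks CoLA,SST,MRPC

The output below is from a run over all tasks.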

Output:

Downloading and extracting CoLA...Completed!
Downloading and extracting SST...Completed!
Processing MRPC...
Local MRPC data not specified, downloading data from https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txtCompleted!
Downloading and extracting QQP...Completed!
Downloading and extracting STS...Completed!
Downloading and extracting MNLI...Completed!
Downloading and extracting SNLI...Completed!
Downloading and extracting QNLI...Completed!
Downloading and extracting RTE...Completed!
Downloading and extracting WNLI...Completed!
Downloading and extracting diagnostic...Completed!

2. Format and task type of each GLUE sub-dataset

2.1 The CoLA dataset [judge whether a sentence is grammatically acceptable]

Task type of the CoLA dataset: single-sentence binary classification (grammatical acceptability judgment).

CoLA dataset file layout:

- CoLA/
    - dev.tsv
    - original/
    - test.tsv
    - train.tsv

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from train.tsv:

...
gj04    1       She coughed herself awake as the leaf landed on her nose.
gj04    1       The worm wriggled onto the carpet.
gj04    1       The chocolate melted onto the carpet.
gj04    0   *   The ball wriggled itself loose.
gj04    1       Bill wriggled himself loose.
bc01    1       The sinking of the ship to collect the insurance was very devious.
bc01    1       The ship's sinking was very devious.
bc01    0   *   The ship's sinking to collect the insurance was very devious.
bc01    1       The testing of such drugs on oneself is too risky.
bc01    0   *   This drug's testing on oneself is too risky.
...

train.tsv has 4 columns: the source code of the sentence, the acceptability label (1 = acceptable, 0 = unacceptable), the original annotator's mark ('*' flags unacceptable sentences), and the sentence itself.
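A minimal sketch for reading such a file, assuming pandas is installed (CoLA's train.tsv has no header row):

# Minimal sketch: read CoLA's train.tsv (tab-separated, no header row).
import csv
import pandas as pd

cols = ["source", "label", "author_mark", "sentence"]
train = pd.read_csv("CoLA/train.tsv", sep="\t", header=None, names=cols,
                    quoting=csv.QUOTE_NONE)  # QUOTE_NONE: do not treat quotes specially
print(train["label"].value_counts())         # distribution of acceptable vs. unacceptable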

Sample rows from test.tsv:

index   sentence
0   Bill whistled past the house.
1   The car honked its way down the road.
2   Bill pushed Harry off the sofa.
3   the kittens yawned awake and played.
4   I demand that the more John eats, the more he pay.
5   If John eats more, keep your mouth shut tighter, OK?
6   His expectations are always lower than mine are.
7   The sooner you call, the more carefully I will word the letter.
8   The more timid he feels, the more people he interviews without asking questions of.
9   Once Janet left, Fred became a lot crazier.
...

test.tsv has 2 columns: the index of each example and the sentence to be classified.

2.2 The SST-2 dataset [sentiment classification]

Task type of the SST-2 dataset: single-sentence binary classification (positive/negative sentiment).

SST-2 dataset file layout:

- SST-2/
    - dev.tsv
    - original/
    - test.tsv
    - train.tsv

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from train.tsv:

sentence    label
hide new secretions from the parental units     0
contains no wit , only labored gags     0
that loves its characters and communicates something rather beautiful about human nature    1
remains utterly satisfied to remain the same throughout     0
on the worst revenge-of-the-nerds clichés the filmmakers could dredge up    0
that 's far too tragic to merit such superficial treatment  0
demonstrates that the director of such hollywood blockbusters as patriot games can still turn out a small , personal film with an emotional wallop .    1
of saucy    1
a depressed fifteen-year-old 's suicidal poetry     0
...

train.tsv has 2 columns: the sentence and its sentiment label (1 = positive, 0 = negative).

Sample rows from test.tsv:

index   sentence
0   uneasy mishmash of styles and genres .
1   this film 's relationship to actual tension is the same as what christmas-tree flocking in a spray can is to actual snow : a poor -- if durable -- imitation .
2   by the end of no such thing the audience , like beatrice , has a watchful affection for the monster .
3   director rob marshall went out gunning to make a great one .
4   lathan and diggs have considerable personal charm , and their screen rapport makes the old story seem new .
5   a well-made and often lovely depiction of the mysteries of friendship .
6   none of this violates the letter of behan 's book , but missing is its spirit , its ribald , full-throated humor .
7   although it bangs a very cliched drum at times , this crowd-pleaser 's fresh dialogue , energetic music , and good-natured spunk are often infectious .
8   it is not a mass-market entertainment but an uncompromising attempt by one artist to think about another .
9   this is junk food cinema at its greasiest .
...

test.tsv has 2 columns: the index of each example and the sentence to be classified.

2.3 The MRPC dataset [judge whether the two sentences in a pair have the same meaning]

Task type of the MRPC dataset: sentence-pair binary classification (paraphrase detection).

MRPC dataset file layout:

- MRPC/
    - dev.tsv
    - test.tsv
    - train.tsv
    - dev_ids.tsv
    - msr_paraphrase_test.txt
    - msr_paraphrase_train.txt

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from train.tsv:

Quality #1 ID   #2 ID   #1 String   #2 String
1   702876  702977  Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence . Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .
0   2108705 2108831 Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion .   Yucaipa bought Dominick 's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998 .
1   1330381 1330521 They had published an advertisement on the Internet on June 10 , offering the cargo for sale , he added .   On June 10 , the ship 's owners had published an advertisement on the Internet , offering the explosives for sale .
0   3344667 3344648 Around 0335 GMT , Tab shares were up 19 cents , or 4.4 % , at A $ 4.56 , having earlier set a record high of A $ 4.57 . Tab shares jumped 20 cents , or 4.6 % , to set a record closing high at A $ 4.57 .
1   1236820 1236712 The stock rose $ 2.11 , or about 11 percent , to close Friday at $ 21.51 on the New York Stock Exchange .   PG & E Corp. shares jumped $ 1.63 or 8 percent to $ 21.03 on the New York Stock Exchange on Friday .
1   738533  737951  Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier .   With the scandal hanging over Stewart 's company , revenue the first quarter of the year dropped 15 percent from the same period a year earlier .
0   264589  264502  The Nasdaq had a weekly gain of 17.27 , or 1.2 percent , closing at 1,520.15 on Friday .    The tech-laced Nasdaq Composite .IXIC rallied 30.46 points , or 2.04 percent , to 1,520.15 .
1   579975  579810  The DVD-CCA then appealed to the state Supreme Court .  The DVD CCA appealed that decision to the U.S. Supreme Court .
...

train.tsv has 5 columns: Quality (the label: 1 = same meaning, 0 = different), #1 ID and #2 ID (the IDs of the two sentences), and #1 String and #2 String (the two sentences themselves).

Sample rows from test.tsv:

index   #1 ID   #2 ID   #1 String   #2 String
0   1089874 1089925 PCCW 's chief operating officer , Mike Butcher , and Alex Arena , the chief financial officer , will report directly to Mr So . Current Chief Operating Officer Mike Butcher and Group Chief Financial Officer Alex Arena will report to So .
1   3019446 3019327 The world 's two largest automakers said their U.S. sales declined more than predicted last month as a late summer sales frenzy caused more of an industry backlash than expected . Domestic sales at both GM and No. 2 Ford Motor Co. declined more than predicted as a late summer sales frenzy prompted a larger-than-expected industry backlash .
2   1945605 1945824 According to the federal Centers for Disease Control and Prevention ( news - web sites ) , there were 19 reported cases of measles in the United States in 2002 .   The Centers for Disease Control and Prevention said there were 19 reported cases of measles in the United States in 2002 .
3   1430402 1430329 A tropical storm rapidly developed in the Gulf of Mexico Sunday and was expected to hit somewhere along the Texas or Louisiana coasts by Monday night . A tropical storm rapidly developed in the Gulf of Mexico on Sunday and could have hurricane-force winds when it hits land somewhere along the Louisiana coast Monday night .
4   3354381 3354396 The company didn 't detail the costs of the replacement and repairs .   But company officials expect the costs of the replacement work to run into the millions of dollars .
5   1390995 1391183 The settling companies would also assign their possible claims against the underwriters to the investor plaintiffs , he added . Under the agreement , the settling companies will also assign their potential claims against the underwriters to the investors , he added .
6   2201401 2201285 Air Commodore Quaife said the Hornets remained on three-minute alert throughout the operation . Air Commodore John Quaife said the security operation was unprecedented .
7   2453843 2453998 A Washington County man may have the countys first human case of West Nile virus , the health department said Friday .  The countys first and only human case of West Nile this year was confirmed by health officials on Sept . 8 .
...

test.tsv has 5 columns: the example index, the IDs of the two sentences, and the two sentences of the pair to be judged.

2.4 The STS-B dataset [score how similar in meaning the two sentences of a pair are]

Task type of the STS-B dataset: sentence-pair regression (a similarity score from 0 to 5).

STS-B dataset file layout:

- STS-B/
    - dev.tsv
    - test.tsv
    - train.tsv
    - LICENSE.txt
    - readme.txt
    - original/

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from train.tsv:

index   genre   filename    year    old_index   source1 source2 sentence1   sentence2   score
0   main-captions   MSRvid  2012test    0001    none    none    A plane is taking off.  An air plane is taking off. 5.000
1   main-captions   MSRvid  2012test    0004    none    none    A man is playing a large flute. A man is playing a flute.   3.800
2   main-captions   MSRvid  2012test    0005    none    none    A man is spreading shreded cheese on a pizza.   A man is spreading shredded cheese on an uncooked pizza.    3.800
3   main-captions   MSRvid  2012test    0006    none    none    Three men are playing chess.Two men are playing chess.  2.600
4   main-captions   MSRvid  2012test    0009    none    none    A man is playing the cello.A man seated is playing the cello.   4.250
5   main-captions   MSRvid  2012test    0011    none    none    Some men are fighting.  Two men are fighting.   4.250
6   main-captions   MSRvid  2012test    0012    none    none    A man is smoking.   A man is skating.   0.500
7   main-captions   MSRvid  2012test    0013    none    none    The man is playing the piano.   The man is playing the guitar.  1.600
8   main-captions   MSRvid  2012test    0014    none    none    A man is playing on a guitar and singing.   A woman is playing an acoustic guitar and singing.  2.200
9   main-captions   MSRvid  2012test    0016    none    none    A person is throwing a cat on to the ceiling.   A person throws a cat on the ceiling.   5.000
...

train.tsv has 10 columns: index, genre, filename, year, old_index, source1, source2, sentence1, sentence2, and score (the human similarity rating from 0 to 5).

Sample rows from test.tsv:

index   genre   filename    year    old_index   source1 source2 sentence1   sentence2
0   main-captions   MSRvid  2012test    0024    none    none    A girl is styling her hair. A girl is brushing her hair.
1   main-captions   MSRvid  2012test    0033    none    none    A group of men play soccer on the beach.    A group of boys are playing soccer on the beach.
2   main-captions   MSRvid  2012test    0045    none    none    One woman is measuring another woman's ankle.   A woman measures another woman's ankle.
3   main-captions   MSRvid  2012test    0063    none    none    A man is cutting up a cucumber. A man is slicing a cucumber.
4   main-captions   MSRvid  2012test    0066    none    none    A man is playing a harp.    A man is playing a keyboard.
5   main-captions   MSRvid  2012test    0074    none    none    A woman is cutting onions.  A woman is cutting tofu.
6   main-captions   MSRvid  2012test    0076    none    none    A man is riding an electric bicycle.    A man is riding a bicycle.
7   main-captions   MSRvid  2012test    0082    none    none    A man is playing the drums. A man is playing the guitar.
8   main-captions   MSRvid  2012test    0092    none    none    A man is playing guitar.    A lady is playing the guitar.
9   main-captions   MSRvid  2012test    0095    none    none    A man is playing a guitar.  A man is playing a trumpet.
10  main-captions   MSRvid  2012test    0096    none    none    A man is playing a guitar.  A man is playing a trumpet.
...

test.tsv has 9 columns, with the same meaning as the first 9 columns of train.tsv (the score column is withheld).

2.5 The QQP dataset [judge whether the two questions in a pair are duplicates]


Task type of the QQP dataset: sentence-pair binary classification (duplicate-question detection).

QQP dataset file layout:

- QQP/
    - dev.tsv
    - original/
    - test.tsv
    - train.tsv

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from train.tsv:

id  qid1    qid2    question1   question2   is_duplicate
133273  213221  213222  How is the life of a math student? Could you describe your own experiences?Which level of prepration is enough for the exam jlpt5?  0
402555  536040  536041  How do I control my horny emotions? How do you control your horniness?  1
360472  364011  490273  What causes stool color to change to yellow?    What can cause stool to come out as little balls?   0
150662  155721  7256    What can one do after MBBS? What do i do after my MBBS ?    1
183004  279958  279959  Where can I find a power outlet for my laptop at Melbourne Airport? Would a second airport in Sydney, Australia be needed if a high-speed rail link was created between Melbourne and Sydney?   0
119056  193387  193388  How not to feel guilty since I am Muslim and I'm conscious we won't have sex together?  I don't beleive I am bulimic, but I force throw up atleast once a day after I eat something and feel guilty. Should I tell somebody, and if so who? 0
356863  422862  96457   How is air traffic controlled?  How do you become an air traffic controller?0
106969  147570  787 What is the best self help book you have read? Why? How did it change your life?    What are the top self help books I should read? 1
...

train.tsv has 6 columns: id (the pair id), qid1 and qid2 (the ids of the two questions), question1 and question2 (the question texts), and is_duplicate (1 = duplicates, 0 = not duplicates).

Sample rows from test.tsv:

id  question1   question2
0   Would the idea of Trump and Putin in bed together scare you, given the geopolitical implications?   Do you think that if Donald Trump were elected President, he would be able to restore relations with Putin and Russia as he said he could, based on the rocky relationship Putin had with Obama and Bush?
1   What are the top ten Consumer-to-Consumer E-commerce online?    What are the top ten Consumer-to-Business E-commerce online?
2   Why don't people simply 'Google' instead of asking questions on Quora?  Why do people ask Quora questions instead of just searching google?
3   Is it safe to invest in social trade biz?   Is social trade geniune?
4   If the universe is expanding then does matter also expand?  If universe and space is expanding? Does that mean anything that occupies space is also expanding?
5   What is the plural of hypothesis?   What is the plural of thesis?
6   What is the application form you need for launching a company?  What is the application form you need for launching a company in Austria?
7   What is Big Theta? When should I use Big Theta as opposed to big O? Is O(Log n) close to O(n) or O(1)?
8   What are the health implications of accidentally eating a small quantity of aluminium foil?What are the implications of not eating vegetables?
...

test.tsv has 3 columns: the first is the example index; the second and third are the question pair to be judged.

2.6 The MNLI/SNLI datasets [classify the inference relation of each sentence pair: entailment, neutral, or contradiction]

Task type of the MNLI/SNLI datasets: sentence-pair three-way classification (natural language inference).

MNLI/SNLI dataset file layout:

- (MNLI/SNLI)/
    - dev_matched.tsv
    - dev_mismatched.tsv
    - original/
    - test_matched.tsv
    - test_mismatched.tsv
    - train.tsv

The files used most often are train.tsv, dev_matched.tsv, dev_mismatched.tsv, test_matched.tsv and test_mismatched.tsv: the training set, a validation set drawn from the same sources as the training set, a validation set drawn from different sources, and the corresponding matched and mismatched test sets. train.tsv, dev_matched.tsv and dev_mismatched.tsv share the same labeled format; test_matched.tsv and test_mismatched.tsv share the same unlabeled format.

Sample rows from train.tsv:

index   promptID    pairID  genre   sentence1_binary_parse  sentence2_binary_parse  sentence1_parse sentence2_parse sentence1   sentence2   label1  gold_label
0   31193   31193n  government  ( ( Conceptually ( cream skimming ) ) ( ( has ( ( ( two ( basic dimensions ) ) - ) ( ( product and ) geography ) ) ) . ) )  ( ( ( Product and ) geography ) ( ( are ( what ( make ( cream ( skimming work ) ) ) ) ) . ) )   (ROOT (S (NP (JJ Conceptually) (NN cream) (NN skimming)) (VP (VBZ has) (NP (NP (CD two) (JJ basic) (NNS dimensions)) (: -) (NP (NN product) (CC and) (NN geography)))) (. .)))  (ROOT (S (NP (NN Product) (CC and) (NN geography)) (VP (VBP are) (SBAR (WHNP (WP what)) (S (VP (VBP make) (NP (NP (NN cream)) (VP (VBG skimming) (NP (NN work)))))))) (. .)))   Conceptually cream skimming has two basic dimensions - product and geography.   Product and geography are what make cream skimming work.    neutral neutral
1   101457  101457e telephone   ( you ( ( know ( during ( ( ( the season ) and ) ( i guess ) ) ) ) ( at ( at ( ( your level ) ( uh ( you ( ( ( lose them ) ( to ( the ( next level ) ) ) ) ( if ( ( if ( they ( decide ( to ( recall ( the ( the ( parent team ) ) ) ) ) ) ) ) ( ( the Braves ) ( decide ( to ( call ( to ( ( recall ( a guy ) ) ( from ( ( triple A ) ( ( ( then ( ( a ( double ( A guy ) ) ) ( ( goes up ) ( to ( replace him ) ) ) ) ) and ) ( ( a ( single ( A guy ) ) ) ( ( goes up ) ( to ( replace him ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ) ( You ( ( ( ( lose ( the things ) ) ( to ( the ( following level ) ) ) ) ( if ( ( the people ) recall ) ) ) . ) )   (ROOT (S (NP (PRP you)) (VP (VBP know) (PP (IN during) (NP (NP (DT the) (NN season)) (CC and) (NP (FW i) (FW guess)))) (PP (IN at) (IN at) (NP (NP (PRP$ your) (NN level)) (SBAR (S (INTJ (UH uh)) (NP (PRP you)) (VP (VBP lose) (NP (PRP them)) (PP (TO to) (NP (DT the) (JJ next) (NN level))) (SBAR (IN if) (S (SBAR (IN if) (S (NP (PRP they)) (VP (VBP decide) (S (VP (TO to) (VP (VB recall) (NP (DT the) (DT the) (NN parent) (NN team)))))))) (NP (DT the) (NNPS Braves)) (VP (VBP decide) (S (VP (TO to) (VP (VB call) (S (VP (TO to) (VP (VB recall) (NP (DT a) (NN guy)) (PP (IN from) (NP (NP (RB triple) (DT A)) (SBAR (S (S (ADVP (RB then)) (NP (DT a) (JJ double) (NNP A) (NN guy)) (VP (VBZ goes) (PRT (RP up)) (S (VP (TO to) (VP (VB replace) (NP (PRP him))))))) (CC and) (S (NP (DT a) (JJ single) (NNP A) (NN guy)) (VP (VBZ goes) (PRT (RP up)) (S (VP (TO to) (VP (VB replace) (NP (PRP him)))))))))))))))))))))))))))) (ROOT (S (NP (PRP You)) (VP (VBP lose) (NP (DT the) (NNS things)) (PP (TO to) (NP (DT the) (JJ following) (NN level))) (SBAR (IN if) (S (NP (DT the) (NNS people)) (VP (VBP recall))))) (. .))) you know during the season and i guess at at your level uh you lose them to the next level if if they decide to recall the the parent team the Braves decide to call to recall a guy from triple A then a double A guy goes up to replace him and a single A guy goes up to replace him You lose the things to the following level if the people recall.    entailment  entailment
2   134793  134793e fiction ( ( One ( of ( our number ) ) ) ( ( will ( ( ( carry out ) ( your instructions ) ) minutely ) ) . ) )   ( ( ( A member ) ( of ( my team ) ) ) ( ( will ( ( execute ( your orders ) ) ( with ( immense precision ) ) ) ) . ) )   (ROOT (S (NP (NP (CD One)) (PP (IN of) (NP (PRP$ our) (NN number)))) (VP (MD will) (VP (VB carry) (PRT (RP out)) (NP (PRP$ your) (NNS instructions)) (ADVP (RB minutely)))) (. .))) (ROOT (S (NP (NP (DT A) (NN member)) (PP (IN of) (NP (PRP$ my) (NN team)))) (VP (MD will) (VP (VB execute) (NP (PRP$ your) (NNS orders)) (PP (IN with) (NP (JJ immense) (NN precision))))) (. .)))  One of our number will carry out your instructions minutely.    A member of my team will execute your orders with immense precision.    entailment  entailment
3   37397   37397e  fiction ( ( How ( ( ( do you ) know ) ? ) ) ( ( All this ) ( ( ( is ( their information ) ) again ) . ) ) ) ( ( This information ) ( ( belongs ( to them ) ) . ) )  (ROOT (S (SBARQ (WHADVP (WRB How)) (SQ (VBP do) (NP (PRP you)) (VP (VB know))) (. ?)) (NP (PDT All) (DT this)) (VP (VBZ is) (NP (PRP$ their) (NN information)) (ADVP (RB again))) (. .)))   (ROOT (S (NP (DT This) (NN information)) (VP (VBZ belongs) (PP (TO to) (NP (PRP them)))) (. .)))    How do you know? All this is their information again.   This information belongs to them.   entailment  entailment
...

train.tsv has 12 columns: index, promptID, pairID, genre, the binary and full constituency parses of both sentences (sentence1_binary_parse, sentence2_binary_parse, sentence1_parse, sentence2_parse), the raw sentences (sentence1, sentence2), and the labels (label1, gold_label).

Sample rows from test_matched.tsv (test_mismatched.tsv has the same layout):

index   promptID    pairID  genre   sentence1_binary_parse  sentence2_binary_parse  sentence1_parse sentence2_parse sentence1   sentence2
0   31493   31493   travel  ( ( ( ( ( ( ( ( Hierbas , ) ( ans seco ) ) , ) ( ans dulce ) ) , ) and ) frigola ) ( ( ( are just ) ( ( a ( few names ) ) ( worth ( ( keeping ( a look-out ) ) for ) ) ) ) . ) )    ( Hierbas ( ( is ( ( a name ) ( worth ( ( looking out ) for ) ) ) ) . ) )   (ROOT (S (NP (NP (NNS Hierbas)) (, ,) (NP (NN ans) (NN seco)) (, ,) (NP (NN ans) (NN dulce)) (, ,) (CC and) (NP (NN frigola))) (VP (VBP are) (ADVP (RB just)) (NP (NP (DT a) (JJ few) (NNS names)) (PP (JJ worth) (S (VP (VBG keeping) (NP (DT a) (NN look-out)) (PP (IN for))))))) (. .))) (ROOT (S (NP (NNS Hierbas)) (VP (VBZ is) (NP (NP (DT a) (NN name)) (PP (JJ worth) (S (VP (VBG looking) (PRT (RP out)) (PP (IN for))))))) (. .)))    Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.    Hierbas is a name worth looking out for.
1   92164   92164   government  ( ( ( The extent ) ( of ( the ( behavioral effects ) ) ) ) ( ( would ( ( depend ( in ( part ( on ( ( the structure ) ( of ( ( ( the ( individual ( account program ) ) ) and ) ( any limits ) ) ) ) ) ) ) ) ( on ( accessing ( the funds ) ) ) ) ) . ) )    ( ( Many people ) ( ( would ( be ( very ( unhappy ( to ( ( loose control ) ( over ( their ( own money ) ) ) ) ) ) ) ) ) . ) )   (ROOT (S (NP (NP (DT The) (NN extent)) (PP (IN of) (NP (DT the) (JJ behavioral) (NNS effects)))) (VP (MD would) (VP (VB depend) (PP (IN in) (NP (NP (NN part)) (PP (IN on) (NP (NP (DT the) (NN structure)) (PP (IN of) (NP (NP (DT the) (JJ individual) (NN account) (NN program)) (CC and) (NP (DT any) (NNS limits)))))))) (PP (IN on) (S (VP (VBG accessing) (NP (DT the) (NNS funds))))))) (. .))) (ROOT (S (NP (JJ Many) (NNS people)) (VP (MD would) (VP (VB be) (ADJP (RB very) (JJ unhappy) (PP (TO to) (NP (NP (JJ loose) (NN control)) (PP (IN over) (NP (PRP$ their) (JJ own) (NN money)))))))) (. .))) The extent of the behavioral effects would depend in part on the structure of the individual account program and any limits on accessing the funds. Many people would be very unhappy to loose control over their own money.
2   9662    9662    government  ( ( ( Timely access ) ( to information ) ) ( ( is ( in ( ( the ( best interests ) ) ( of ( ( ( both GAO ) and ) ( the agencies ) ) ) ) ) ) . ) )    ( It ( ( ( is ( in ( ( everyone 's ) ( best interest ) ) ) ) ( to ( ( have access ) ( to ( information ( in ( a ( timely manner ) ) ) ) ) ) ) ) . ) )   (ROOT (S (NP (NP (JJ Timely) (NN access)) (PP (TO to) (NP (NN information)))) (VP (VBZ is) (PP (IN in) (NP (NP (DT the) (JJS best) (NNS interests)) (PP (IN of) (NP (NP (DT both) (NNP GAO)) (CC and) (NP (DT the) (NNS agencies))))))) (. .))) (ROOT (S (NP (PRP It)) (VP (VBZ is) (PP (IN in) (NP (NP (NN everyone) (POS 's)) (JJS best) (NN interest))) (S (VP (TO to) (VP (VB have) (NP (NN access)) (PP (TO to) (NP (NP (NN information)) (PP (IN in) (NP (DT a) (JJ timely) (NN manner))))))))) (. .)))   Timely access to information is in the best interests of both GAO and the agencies. It is in everyone's best interest to have access to information in a timely manner.
3   5991    5991    travel  ( ( Based ( in ( ( the ( Auvergnat ( spa town ) ) ) ( of Vichy ) ) ) ) ( , ( ( the ( French government ) ) ( often ( ( ( ( proved ( more zealous ) ) ( than ( its masters ) ) ) ( in ( ( ( suppressing ( civil liberties ) ) and ) ( ( drawing up ) ( anti-Jewish legislation ) ) ) ) ) . ) ) ) ) ) ( ( The ( French government ) ) ( ( passed ( ( anti-Jewish laws ) ( aimed ( at ( helping ( the Nazi ) ) ) ) ) ) . ) )   (ROOT (S (PP (VBN Based) (PP (IN in) (NP (NP (DT the) (NNP Auvergnat) (NN spa) (NN town)) (PP (IN of) (NP (NNP Vichy)))))) (, ,) (NP (DT the) (JJ French) (NN government)) (ADVP (RB often)) (VP (VBD proved) (NP (JJR more) (NNS zealous)) (PP (IN than) (NP (PRP$ its) (NNS masters))) (PP (IN in) (S (VP (VP (VBG suppressing) (NP (JJ civil) (NNS liberties))) (CC and) (VP (VBG drawing) (PRT (RP up)) (NP (JJ anti-Jewish) (NN legislation))))))) (. .))) (ROOT (S (NP (DT The) (JJ French) (NN government)) (VP (VBD passed) (NP (NP (JJ anti-Jewish) (NNS laws)) (VP (VBN aimed) (PP (IN at) (S (VP (VBG helping) (NP (DT the) (JJ Nazi)))))))) (. .))) Based in the Auvergnat spa town of Vichy, the French government often proved more zealous than its masters in suppressing civil liberties and drawing up anti-Jewish legislation.   The French government passed anti-Jewish laws aimed at helping the Nazi.
...

test_matched.tsv has 10 columns, with the same meaning as the first 10 columns of train.tsv (the label columns are withheld).

2.7 The QNLI/RTE/WNLI datasets [classify the inference relation of each sentence pair: entailed or not entailed]

Task type of the QNLI/RTE/WNLI datasets: sentence-pair binary classification (entailment vs. non-entailment).

QNLI/RTE/WNLI dataset file layout [the three datasets are laid out almost identically]:

- (QNLI/RTE/WNLI)/
    - dev.tsv
    - test.tsv
    - train.tsv

The files used most often are train.tsv, dev.tsv and test.tsv: the training, validation and test sets. train.tsv and dev.tsv share the same labeled format; test.tsv is unlabeled.

Sample rows from QNLI's train.tsv:

index   question    sentence    label
0   When did the third Digimon series begin?    Unlike the two seasons before it and most of the seasons that followed, Digimon Tamers takes a darker and more realistic approach to its story featuring Digimon who do not reincarnate after their deaths and more complex character development in the original Japanese. not_entailment
1   Which missile batteries often have individual launchers several kilometres from one another?    When MANPADS is operated by specialists, batteries may have several dozen teams deploying separately in small sections; self-propelled air defence guns may deploy in pairs.    not_entailment
2   What two things does Popper argue Tarski's theory involves in an evaluation of truth?   He bases this interpretation on the fact that examples such as the one described above refer to two things: assertions and the facts to which they refer.   entailment
3   What is the name of the village 9 miles north of Calafat where the Ottoman forces attacked the Russians?    On 31 December 1853, the Ottoman forces at Calafat moved against the Russian force at Chetatea or Cetate, a small village nine miles north of Calafat, and engaged them on 6 January 1854.  entailment
4   What famous palace is located in London?    London contains four World Heritage Sites: the Tower of London; Kew Gardens; the site comprising the Palace of Westminster, Westminster Abbey, and St Margaret's Church; and the historic settlement of Greenwich (in which the Royal Observatory, Greenwich marks the Prime Meridian, 0° longitude, and GMT).  not_entailment
5   When is the term 'German dialects' used in regard to the German language?   When talking about the German language, the term German dialects is only used for the traditional regional varieties.   entailment
6   What was the name of the island the English traded to the Dutch in return for New Amsterdam?    At the end of the Second Anglo-Dutch War, the English gained New Amsterdam (New York) in North America in exchange for Dutch control of Run, an Indonesian island.  entailment
7   How were the Portuguese expelled from Myanmar?  From the 1720s onward, the kingdom was beset with repeated Meithei raids into Upper Myanmar and a nagging rebellion in Lan Na.  not_entailment
8   What does the word 'customer' properly apply to?    The bill also required rotation of principal maintenance inspectors and stipulated that the word "customer" properly applies to the flying public, not those entities regulated by the FAA. entailment
...

Sample rows from RTE's train.tsv:

index   sentence1   sentence2   label
0   No Weapons of Mass Destruction Found in Iraq Yet.   Weapons of Mass Destruction Found in Iraq.  not_entailment
1   A place of sorrow, after Pope John Paul II died, became a place of celebration, as Roman Catholic faithful gathered in downtown Chicago to mark the installation of new Pope Benedict XVI.Pope Benedict XVI is the new leader of the Roman Catholic Church. entailment
2   Herceptin was already approved to treat the sickest breast cancer patients, and the company said, Monday, it will discuss with federal regulators the possibility of prescribing the drug for more breast cancer patients.  Herceptin can be used to treat breast cancer.   entailment
3   Judie Vivian, chief executive at ProMedica, a medical service company that helps sustain the 2-year-old Vietnam Heart Institute in Ho Chi Minh City (formerly Saigon), said that so far about 1,500 children have received treatment.   The previous name of Ho Chi Minh City was Saigon.entailment
4   A man is due in court later charged with the murder 26 years ago of a teenager whose case was the first to be featured on BBC One's Crimewatch. Colette Aram, 16, was walking to her boyfriend's house in Keyworth, Nottinghamshire, on 30 October 1983 when she disappeared. Her body was later found in a field close to her home. Paul Stewart Hutchinson, 50, has been charged with murder and is due before Nottingham magistrates later.  Paul Stewart Hutchinson is accused of having stabbed a girl.    not_entailment
5   Britain said, Friday, that it has barred cleric, Omar Bakri, from returning to the country from Lebanon, where he was released by police after being detained for 24 hours. Bakri was briefly detained, but was released.   entailment
6   Nearly 4 million children who have at least one parent who entered the U.S. illegally were born in the United States and are U.S. citizens as a result, according to the study conducted by the Pew Hispanic Center. That's about three quarters of the estimated 5.5 million children of illegal immigrants inside the United States, according to the study. About 1.8 million children of undocumented immigrants live in poverty, the study found.  Three quarters of U.S. illegal immigrants have children.    not_entailment
7   Like the United States, U.N. officials are also dismayed that Aristide killed a conference called by Prime Minister Robert Malval in Port-au-Prince in hopes of bringing all the feuding parties together.  Aristide had Prime Minister Robert Malval  murdered in Port-au-Prince.  not_entailment
8   WASHINGTON --  A newly declassified narrative of the Bush administration's advice to the CIA on harsh interrogations shows that the small group of Justice Department lawyers who wrote memos authorizing controversial interrogation techniques were operating not on their own but with direction from top administration officials, including then-Vice President Dick Cheney and national security adviser Condoleezza Rice. At the same time, the narrative suggests that then-Defense Secretary Donald H. Rumsfeld and then-Secretary of State Colin Powell were largely left out of the decision-making process. Dick Cheney was the Vice President of Bush. entailment

Sample rows from WNLI's train.tsv:

index   sentence1   sentence2   label
0   I stuck a pin through a carrot. When I pulled the pin out, it had a hole.   The carrot had a hole.  1
1   John couldn't see the stage with Billy in front of him because he is so short.  John is so short.   1
2   The police arrested all of the gang members. They were trying to stop the drug trade in the neighborhood.   The police were trying to stop the drug trade in the neighborhood.  1
3   Steve follows Fred's example in everything. He influences him hugely.   Steve influences him hugely.    0
4   When Tatyana reached the cabin, her mother was sleeping. She was careful not to disturb her, undressing and climbing back into her berth.   mother was careful not to disturb her, undressing and climbing back into her berth. 0
5   George got free tickets to the play, but he gave them to Eric, because he was particularly eager to see it. George was particularly eager to see it.    0
6   John was jogging through the park when he saw a man juggling watermelons. He was very impressive.   John was very impressive.   0
7   I couldn't put the pot on the shelf because it was too tall.    The pot was too tall.   1
8   We had hoped to place copies of our newsletter on all the chairs in the auditorium, but there were simply not enough of them.   There were simply not enough copies of the newsletter.  1

Notes on train.tsv in QNLI/RTE/WNLI: train.tsv has 4 columns: the example index; the two texts of the pair (a question and a sentence for QNLI, sentence1 and sentence2 for RTE/WNLI); and the label (entailment/not_entailment for QNLI and RTE, 0 or 1 for WNLI).

Sample rows from QNLI's test.tsv:

index   question    sentence
0   What organization is devoted to Jihad against Israel?   For some decades prior to the First Palestine Intifada in 1987, the Muslim Brotherhood in Palestine took a "quiescent" stance towards Israel, focusing on preaching, education and social services, and benefiting from Israel's "indulgence" to build up a network of mosques and charitable organizations.
1   In what century was the Yarrow-Schlick-Tweedy balancing system used?    In the late 19th century, the Yarrow-Schlick-Tweedy balancing 'system' was used on some marine triple expansion engines.
2   The largest brand of what store in the UK is located in Kingston Park?  Close to Newcastle, the largest indoor shopping centre in Europe, the MetroCentre, is located in Gateshead.
3   What does the IPCC rely on for research?    In principle, this means that any significant new evidence or events that change our understanding of climate science between this deadline and publication of an IPCC report cannot be included.
4   What is the principle about relating spin and space variables?  Thus in the case of two fermions there is a strictly negative correlation between spatial and spin variables, whereas for two bosons (e.g. quanta of electromagnetic waves, photons) the correlation is strictly positive.
5   Which network broadcasted Super Bowl 50 in the U.S.?    CBS broadcast Super Bowl 50 in the U.S., and charged an average of $5 million for a 30-second commercial during the game.
6   What did the museum acquire from the Royal College of Science?  To link this to the rest of the museum, a new entrance building was constructed on the site of the former boiler house, the intended site of the Spiral, between 1978 and 1982.
7   What is the name of the old north branch of the Rhine?  From Wijk bij Duurstede, the old north branch of the Rhine is called Kromme Rijn ("Bent Rhine") past Utrecht, first Leidse Rijn ("Rhine of Leiden") and then, Oude Rijn ("Old Rhine").
8   What was one of Luther's most personal writings?    It remains in use today, along with Luther's hymns and his translation of the Bible.
...

Sample rows from test.tsv in RTE/WNLI:

index   sentence1   sentence2
0   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when Maude and Dora came in sight.
1   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when the trains came in sight.
2   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when the puffs came in sight.
3   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when the roars came in sight.
4   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when the whistles came in sight.
5   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they came in sight.    Horses ran away when the horses came in sight.
6   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they saw a train coming.   Maude and Dora saw a train coming.
7   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they saw a train coming.   The trains saw a train coming.
8   Maude and Dora had seen the trains rushing across the prairie, with long, rolling puffs of black smoke streaming back from the engine. Their roars and their wild, clear whistles could be heard from far away. Horses ran away when they saw a train coming.   The puffs saw a train coming.
...

Notes on test.tsv in QNLI/RTE/WNLI: test.tsv has 3 columns: the example index and the two texts of the pair to be judged.

III. Common pretrained models in NLP

Popular pretrained models in NLP today:

All of the pretrained models listed below, and their variants, are based on the Transformer architecture. They differ only in structural details, such as how the layers are wired, the number of encoder hidden layers, and the number of attention heads, and most of those design choices are justified by performance on standard datasets.

For users, therefore, there is no need to analyze the theoretical merits of each architecture in depth; it is usually enough to try every applicable model on your own target data and keep whichever performs best, as sketched below.
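A minimal sketch of that model sweep (evaluate_on_dev is a hypothetical placeholder for your own fine-tune-and-evaluate routine, and the candidate names are illustrative):

# Hypothetical sketch: try several checkpoints on your own dev data, keep the best.
def evaluate_on_dev(checkpoint_name):
    # Placeholder: fine-tune `checkpoint_name` on your data, return its dev metric.
    raise NotImplementedError

candidates = ["bert-base-chinese", "xlnet-base-cased", "roberta-base"]
best = max(candidates, key=evaluate_on_dev)  # keep whichever scores highest
print("best checkpoint:", best)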

1. BERT and its variants

bert-base-uncased

bert-large-uncased

bert-base-cased

bert-large-cased

bert-base-multilingual-uncased

bert-base-multilingual-cased

bert-base-chinese

2. GPT

openai-gpt: 12 encoder layers, 768-dimensional outputs, 12 self-attention heads, about 110M parameters; trained by OpenAI on English text.

3. GPT-2 and its variants

gpt2

gpt2-xl

4. Transformer-XL

transfo-xl-wt103

5. XLNet and its variants

xlnet-base-cased

xlnet-large-cased


6. XLM

xlm-mlm-en-2048

7. RoBERTa and its variants

roberta-base

roberta-large

8. DistilBERT and its variants

distilbert-base-uncased

distilbert-base-multilingual-cased

9. ALBERT

albert-base-v1

albert-base-v2

10. T5 and its variants

t5-small

t5-base

t5-large

11. XLM-RoBERTa and its variants

xlm-roberta-base

xlm-roberta-large

IV. Loading and using pretrained models

We load and use pretrained models through the torch.hub utility; the models themselves are published by leading NLP research teams.

Steps for loading and using a pretrained model:

1. Step one: decide which pretrained model to load and install its dependencies

Which models can be loaded: see "III. Common pretrained models in NLP" above.

Here we assume the task at hand is Chinese text, so the model to load is BERT's Chinese model: bert-base-chinese.

Before loading the model, install the required dependency packages:

pip install tqdm boto3 requests regex sentencepiece sacremoses tokenizers

2. Step two: load the pretrained model's tokenizer

import torch

source = 'huggingface/pytorch-transformers'  # source of the pretrained models
model_name = 'bert-base-chinese'             # name of the pretrained model to load

# 1. Load the model's tokenizer, which maps each token of an input string to
#    that token's index in the model_name vocabulary.
#    Arguments: the model source (repo), the entry point to load ('tokenizer'
#    selects the tokenizer part of the model), and the pretrained model's name.
tokenizer = torch.hub.load(source, 'tokenizer', model_name)

3. Step three: load the pretrained model with or without a head

When loading a pretrained model we can choose a version with or without a "head"; the head is the task-specific output layer.

import torch

repo = "huggingface/transformers"   # GitHub repo of the pretrained models (https://github.com/huggingface/transformers)
model_name = "bert-base-chinese"    # name of the pretrained model to load

# 1. Load the tokenizer (maps each token of a string to its index in the model_name vocabulary).
tokenizer = torch.hub.load(repo, "tokenizer", model_name)

# 2. Load the model with or without a head. Arguments: model source, entry
#    point (which part/variant of the model to load), and the model's name.
model = torch.hub.load(repo, "model", model_name)                                          # headless (bare) pretrained model
lm_model = torch.hub.load(repo, "modelWithLMHead", model_name)                             # with a language-model head
classification_model = torch.hub.load(repo, "modelForSequenceClassification", model_name)  # with a sequence-classification head
qa_model = torch.hub.load(repo, "modelForQuestionAnswering", model_name)                   # with a question-answering head
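For reference, the same with/without-head choice exists in the transformers API itself; a minimal sketch, assuming a recent transformers version is installed:

# Minimal sketch: the corresponding Auto* classes in `transformers`.
from transformers import (AutoModel, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification,
                          AutoModelForQuestionAnswering)

name = "bert-base-chinese"
bare_model = AutoModel.from_pretrained(name)                          # headless encoder
mlm_model = AutoModelForMaskedLM.from_pretrained(name)                # masked-LM head (pretrained)
cls_model = AutoModelForSequenceClassification.from_pretrained(name)  # classification head (newly initialized)
span_model = AutoModelForQuestionAnswering.from_pretrained(name)      # QA head (newly initialized)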

4. Step four: use the model to produce outputs

4.1 Output from the headless model [obtain the word vectors the pretrained BERT model has already learned]

A headless BERT model can essentially be treated as a new kind of text-embedding layer: for an existing task, simply take BERT's output as the feature representation of the input and build your own model on top of it.

import torch

repo = "huggingface/transformers"   # GitHub repo of the pretrained models (https://github.com/huggingface/transformers)
model_name = "bert-base-chinese"    # name of the pretrained model to load

# 1. Load the tokenizer (maps each token of a string to its index in the model_name vocabulary).
tokenizer = torch.hub.load(repo, "tokenizer", model_name)

# 2. Load the headless pretrained model.
model = torch.hub.load(repo, "model", model_name)

# 3. Use the tokenizer to map the string to numeric ids and build a tensor.
input_text = "人生该如何起头"                    # input Chinese text ("How should one start out in life?")
indexed_tokens = tokenizer.encode(input_text)  # numeric mapping
print("tokenizer mapping result: indexed_tokens = ", indexed_tokens)
# [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]: 101 and 102 are the start/end
# markers; the numbers in between map one-to-one to the characters of the input.
tokens_tensor = torch.tensor([indexed_tokens])  # tensor fed to the headless model

# 4. Run the headless model to get its outputs.
with torch.no_grad():
    model_result = model(tokens_tensor)
last_hidden_state = model_result.last_hidden_state
pooler_output = model_result.pooler_output
print("headless model----model---->type(model_result) = {0}".format(type(model_result)))
print("headless model----model---->model_result = {0}".format(model_result))
print("headless model----model---->last_hidden_state.shape = {0}".format(last_hidden_state.shape))
print("headless model----model---->pooler_output.shape = {0}".format(pooler_output.shape))

Downloading: "https://github.com/huggingface/pytorch-transformers/archive/master.zip" to C:\Users\Admin/.cache\torch\hub\master.zip
Downloading: 100%|██████████| 624/624 [00:00<00:00, 625kB/s]
Downloading: 100%|██████████| 110k/110k [00:00<00:00, 132kB/s] 
Downloading: 100%|██████████| 269k/269k [00:01<00:00, 160kB/s]
PreTrainedTokenizerFast(name_or_path='bert-base-chinese', vocab_size=21128, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
Using cache found in C:\Users\Admin/.cache\torch\hub\huggingface_pytorch-transformers_master
Downloading: 100%|██████████| 412M/412M [03:52<00:00, 1.77MB/s]
Using cache found in C:\Users\Admin/.cache\torch\hub\huggingface_pytorch-transformers_master
C:\Users\Admin/.cache\torch\hub\huggingface_pytorch-transformers_master\src\transformers\models\auto\modeling_auto.py:1001: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.FutureWarning,
Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using cache found in C:\Users\Admin/.cache\torch\hub\huggingface_pytorch-transformers_master
Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-chinese and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using cache found in C:\Users\Admin/.cache\torch\hub\huggingface_pytorch-transformers_master
Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-chinese and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tokenizer mapping result: indexed_tokens =  [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
headless model----model---->type(model_result) = <class 'transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions'>
headless model----model---->model_result = BaseModelOutputWithPoolingAndCrossAttentions(last_hidden_state=tensor([[[ 0.5421,  0.4526, -0.0179,  ...,  1.0447, -0.1140,  0.0068],
        [-0.1343,  0.2785,  0.1602,  ..., -0.0345, -0.1646, -0.2186],
        [ 0.9960, -0.5121, -0.6229,  ...,  1.4173,  0.5533, -0.2681],
        ...,
        [ 0.0115,  0.2150, -0.0163,  ...,  0.6445,  0.2452, -0.3749],
        [ 0.8649,  0.4337, -0.1867,  ...,  0.7397, -0.2636,  0.2144],
        [-0.6207,  0.1668,  0.1561,  ...,  1.1218, -0.0985, -0.0937]]]), pooler_output=tensor([[ 0.9997,  1.0000,  0.9930,  0.9879,  0.8739,  0.9836, -0.9358, -0.9864,
         0.9831, -0.9975,  1.0000,  0.9884, -0.4969, -0.9495,  0.9994, -0.9998,
        -0.4415,  0.9989,  0.9817,  0.0225,  0.9999, -1.0000, -0.9757,  0.6133,
        ...,
        -0.9897, -0.9983, -0.9969, -0.9963,  0.9405]]), hidden_states=None, past_key_values=None, attentions=None, cross_attentions=None)
headless model----model---->last_hidden_state.shape = torch.Size([1, 9, 768])
headless model----model---->pooler_output.shape = torch.Size([1, 768])
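Building on the pooler_output above, here is a minimal sketch (an illustration under stated assumptions, not part of the original tutorial) of a binary classifier placed on top of the frozen headless model; it reuses the model and tokens_tensor variables from the code above:

# Minimal sketch: a binary classifier on top of the frozen headless BERT.
import torch
import torch.nn as nn

class BertBinaryClassifier(nn.Module):
    def __init__(self, bert, hidden_size=768, num_labels=2):
        super().__init__()
        self.bert = bert                    # the headless pretrained model
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, tokens):
        with torch.no_grad():               # keep BERT frozen; only the head trains
            pooled = self.bert(tokens).pooler_output   # shape [batch, 768]
        return self.head(pooled)            # logits, shape [batch, num_labels]

clf = BertBinaryClassifier(model)
print(clf(tokens_tensor).shape)             # torch.Size([1, 2])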

4.2 Output from the model with a language-model head

import torch

repo = "huggingface/transformers"   # GitHub repo of the pretrained models (https://github.com/huggingface/transformers)
model_name = "bert-base-chinese"    # name of the pretrained model to load

# 1. Load the tokenizer (maps each token of a string to its index in the model_name vocabulary).
tokenizer = torch.hub.load(repo, "tokenizer", model_name)

# 2. Load the pretrained model with a language-model head.
lm_model = torch.hub.load(repo, "modelWithLMHead", model_name)

# 3. Use the tokenizer to map the string to numeric ids and build a tensor.
input_text = "人生该如何起头"
indexed_tokens = tokenizer.encode(input_text)
print("tokenizer mapping result: indexed_tokens = ", indexed_tokens)  # [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
tokens_tensor = torch.tensor([indexed_tokens])

# 4. Run the model with the language-model head.
with torch.no_grad():
    lm_output = lm_model(tokens_tensor)
logits = lm_output.logits
print("LM-head model----lm_model---->type(lm_output) = {0}".format(type(lm_output)))
print("LM-head model----lm_model---->lm_output = {0}".format(lm_output))
print("LM-head model----lm_model---->logits.shape = {0}".format(logits.shape))

tokenizer mapping result: indexed_tokens =  [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
LM-head model----lm_model---->type(lm_output) = <class 'transformers.modeling_outputs.MaskedLMOutput'>
LM-head model----lm_model---->lm_output = MaskedLMOutput(loss=None, logits=tensor([[[ -7.9706,  -7.9119,  -7.9317,  ...,  -7.2174,  -7.0263,  -7.3746],
        [ -8.2097,  -8.1810,  -8.0645,  ...,  -7.2349,  -6.9283,  -6.9856],
        [-13.7458, -13.5978, -12.6076,  ...,  -7.6817,  -9.5642, -11.9928],
        ...,
        [ -9.0928,  -8.6858,  -8.4648,  ...,  -8.2368,  -7.5684, -10.2419],
        [ -8.9458,  -8.5784,  -8.6325,  ...,  -7.0547,  -5.3288,  -7.8077],
        [ -8.4154,  -8.5217,  -8.5379,  ...,  -6.7102,  -5.9782,  -7.6909]]]), hidden_states=None, attentions=None)
LM-head model----lm_model---->logits.shape = torch.Size([1, 9, 21128])
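The logits have shape [1, 9, 21128]: one score per vocabulary entry (21128 tokens) at each of the 9 input positions. A small illustrative sketch (not from the original) of decoding the highest-scoring token at every position:

# Minimal sketch: take the argmax over the vocabulary at each position and
# map the resulting ids back to tokens with the same tokenizer.
predicted_ids = logits.argmax(dim=-1)[0].tolist()      # 9 vocabulary indices
print(tokenizer.convert_ids_to_tokens(predicted_ids))  # tokens the LM head scores highest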

4.3 Output from the model with a sequence-classification head

import torch

repo = "huggingface/transformers"   # GitHub repo of the pretrained models (https://github.com/huggingface/transformers)
model_name = "bert-base-chinese"    # name of the pretrained model to load

# 1. Load the tokenizer (maps each token of a string to its index in the model_name vocabulary).
tokenizer = torch.hub.load(repo, "tokenizer", model_name)

# 2. Load the pretrained model with a sequence-classification head.
classification_model = torch.hub.load(repo, "modelForSequenceClassification", model_name)

# 3. Use the tokenizer to map the string to numeric ids and build a tensor.
input_text = "人生该如何起头"
indexed_tokens = tokenizer.encode(input_text)
print("tokenizer mapping result: indexed_tokens = ", indexed_tokens)  # [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
tokens_tensor = torch.tensor([indexed_tokens])

# 4. Run the model with the classification head.
with torch.no_grad():
    classification_output = classification_model(tokens_tensor)
logits = classification_output.logits
print("classification-head model----classification_model---->type(classification_output) = {0}".format(type(classification_output)))
print("classification-head model----classification_model---->classification_output = {0}".format(classification_output))
print("classification-head model----classification_model---->logits.shape = {0}".format(logits.shape))

Output:

tokenizer mapping result: indexed_tokens =  [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
classification-head model----classification_model---->type(classification_output) = <class 'transformers.modeling_outputs.SequenceClassifierOutput'>
classification-head model----classification_model---->classification_output = SequenceClassifierOutput(loss=None, logits=tensor([[-0.0814, -0.7647]]), hidden_states=None, attentions=None)
classification-head model----classification_model---->logits.shape = torch.Size([1, 2])
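Note that, as the warnings earlier state, this classification head is newly (randomly) initialized, so the two logits are meaningless until the model is fine-tuned. A minimal sketch of turning them into class probabilities:

# Minimal sketch: convert the two logits into class probabilities.
probs = torch.softmax(logits, dim=-1)  # shape [1, 2]; arbitrary before fine-tuning
pred = probs.argmax(dim=-1).item()     # index of the predicted class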

4.4 Output from the model with a question-answering head

import torch

repo = "huggingface/transformers"   # GitHub repo of the pretrained models (https://github.com/huggingface/transformers)
model_name = "bert-base-chinese"    # name of the pretrained model to load

# 1. Load the tokenizer (maps each token of a string to its index in the model_name vocabulary).
tokenizer = torch.hub.load(repo, "tokenizer", model_name)

# 2. Load the pretrained model with a question-answering head.
qa_model = torch.hub.load(repo, "modelForQuestionAnswering", model_name)

# 3. Use the tokenizer to map the strings to numeric ids and build tensors.
input_text1 = "我家的小狗是黑色的"          # "Our puppy is black."
input_text2 = "我家的小狗是什么颜色的呢?"   # "What color is our puppy?"
indexed_tokens = tokenizer.encode(input_text1, input_text2)  # encode the two sentences together
print("sentence-pair tokenizer mapping result: indexed_tokens = ", indexed_tokens)

# Distinguish the two sentences with 0s and 1s: 0 marks the first sentence
# (9 characters plus the start marker 101 and the separator 102 = 11 positions),
# 1 marks the second (13 characters plus the final separator 102 = 14 positions).
segments_ids = [0] * 11 + [1] * 14
segments_tensors = torch.tensor([segments_ids])  # tensor form
tokens_tensor = torch.tensor([indexed_tokens])   # tensor form

# 4. Run the model with the question-answering head.
with torch.no_grad():
    qa_model_result = qa_model(tokens_tensor, token_type_ids=segments_tensors)
start_logits = qa_model_result.start_logits
end_logits = qa_model_result.end_logits
print("QA-head model----qa_model---->type(qa_model_result) = {0}".format(type(qa_model_result)))
print("QA-head model----qa_model---->qa_model_result = {0}".format(qa_model_result))
print("QA-head model----qa_model---->start_logits.shape = {0}".format(start_logits.shape))
print("QA-head model----qa_model---->end_logits.shape = {0}".format(end_logits.shape))

The model outputs two tensors of shape 1x25: score distributions over the start and end positions across the combined length of the two sentences [25 = (9 + 2) + (13 + 1)].

Output:

sentence-pair tokenizer mapping result: indexed_tokens =  [101, 2769, 2157, 4638, 2207, 4318, 3221, 7946, 5682, 4638, 102, 2769, 2157, 4638, 2207, 4318, 3221, 784, 720, 7582, 5682, 4638, 1450, 136, 102]
QA-head model----qa_model---->type(qa_model_result) = <class 'transformers.modeling_outputs.QuestionAnsweringModelOutput'>
QA-head model----qa_model---->qa_model_result = QuestionAnsweringModelOutput(loss=None, start_logits=tensor([[ 1.0115,  0.5367,  0.2698,  0.1581,  0.2114,  0.1591, -0.1055,  0.2319,
         0.2572,  0.0679,  0.3059,  0.4906,  0.3669,  0.5004,  0.4368,  0.2729,
        -0.0703, -0.0466,  0.2149,  0.1240,  0.1702,  0.2397,  0.0068,  0.0245,
         0.3059]]), end_logits=tensor([[ 1.7496,  0.5301,  0.6449,  0.2216,  0.5858,  0.7002,  0.4426,  0.3600,
         0.4163, -0.1553,  1.3499,  0.7014,  0.7405,  0.3920,  0.8280,  0.8707,
         0.5957,  0.9251,  0.8773,  0.4594,  0.7127, -0.1657, -0.0331,  0.3084,
         1.3499]]), hidden_states=None, attentions=None)
QA-head model----qa_model---->start_logits.shape = torch.Size([1, 25])
QA-head model----qa_model---->end_logits.shape = torch.Size([1, 25])
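The QA head, too, is newly initialized, so the span it picks is meaningless before fine-tuning; still, a minimal sketch of how an answer span would be read off these logits (production code would additionally require start <= end and mask out the question segment):

# Minimal sketch: decode the most likely answer span from the QA logits.
start = start_logits.argmax(dim=-1).item()  # most likely start position
end = end_logits.argmax(dim=-1).item()      # most likely end position
answer_ids = indexed_tokens[start:end + 1]  # ids of the predicted span
print(tokenizer.convert_ids_to_tokens(answer_ids))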

5. Manually downloading a BERT pretrained model and loading it

5.1 Go to https://huggingface.co/models and search for the model you need (below we use bert-base-chinese as an example)

5.2 On the model's page, open the "Files and versions" tab

5.3 The download page

(Taking the PyTorch version of the model as an example.) Download the three files that were highlighted in the original screenshot: typically config.json, vocab.txt and pytorch_model.bin.

5.4 Using the downloaded files

Suppose the downloaded files are placed in the folder F:/Pretrained_Model/pytorch/bert-base-chinese (as in the code below); they can then be used like this:

import torch
from transformers import BertModel, BertTokenizer

# Load BERT's tokenizer from the local folder.
tokenizer = BertTokenizer.from_pretrained('F:/Pretrained_Model/pytorch/bert-base-chinese')
# Load the BERT model; this folder holds the config.json configuration file
# and the pytorch_model.bin weight file.
model = BertModel.from_pretrained('F:/Pretrained_Model/pytorch/bert-base-chinese')

# Use the tokenizer to map the string to numeric ids and build a tensor.
input_text = "人生该如何起头"
indexed_tokens = tokenizer.encode(input_text)
print("tokenizer mapping result: indexed_tokens = ", indexed_tokens)  # [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
tokens_tensor = torch.tensor([indexed_tokens])

# Run the headless model and print its outputs.
with torch.no_grad():
    model_result = model(tokens_tensor)
print("headless model----model---->type(model_result) = {0}".format(type(model_result)))
print("headless model----model---->model_result[0].shape = {0}---->model_result[0] = {1}".format(model_result[0].shape, model_result[0]))
print("headless model----model---->model_result[1].shape = {0}---->model_result[1] = {1}".format(model_result[1].shape, model_result[1]))

Output:

tokenizer mapping result: indexed_tokens =  [101, 782, 4495, 6421, 1963, 862, 6629, 1928, 102]
headless model----model---->type(model_result) = <class 'tuple'>
headless model----model---->model_result[0].shape = torch.Size([1, 9, 768])---->model_result[0] = tensor([[[ 0.5421,  0.4526, -0.0179,  ...,  1.0447, -0.1140,  0.0068],
        [-0.1343,  0.2785,  0.1602,  ..., -0.0345, -0.1646, -0.2186],
        [ 0.9960, -0.5121, -0.6229,  ...,  1.4173,  0.5533, -0.2681],
        ...,
        [ 0.0115,  0.2150, -0.0163,  ...,  0.6445,  0.2452, -0.3749],
        [ 0.8649,  0.4337, -0.1867,  ...,  0.7397, -0.2636,  0.2144],
        [-0.6207,  0.1668,  0.1561,  ...,  1.1218, -0.0985, -0.0937]]])
headless model----model---->model_result[1].shape = torch.Size([1, 768])---->model_result[1] = tensor([[ 0.9997,  1.0000,  0.9930,  0.9879,  0.8739,  0.9836, -0.9358, -0.9864,
         0.9831, -0.9975,  1.0000,  0.9884, -0.4969, -0.9495,  0.9994, -0.9998,
        -0.4415,  0.9989,  0.9817,  0.0225,  0.9999, -1.0000, -0.9757,  0.6133,
         0.7982,  0.9967,  0.9546, -0.8866, -1.0000,  0.9970,  0.9631,  0.9994,
        ...,
        -0.9966,  0.9998, -0.9941,  0.9923, -0.9995,  0.7561,  0.8380,  0.8948,
        -0.9951,  1.0000,  0.9387, -0.9897, -0.9983, -0.9969, -0.9963,  0.9405]])
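Here model_result is a plain tuple (the behavior of the older transformers version used for this run): model_result[0] and model_result[1] correspond to last_hidden_state and pooler_output. A minimal sketch, assuming a recent transformers version, of requesting the structured output explicitly:

# Minimal sketch: ask for a structured ModelOutput instead of a tuple
# (supported by recent transformers versions).
with torch.no_grad():
    out = model(tokens_tensor, return_dict=True)
print(out.last_hidden_state.shape)  # torch.Size([1, 9, 768])
print(out.pooler_output.shape)      # torch.Size([1, 768])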
