| Name | Modified | Size |
|---|---|---|
| README.md | 2024-06-28 | 20.9 kB |
| v3.0.0-beta0 source code.tar.gz | 2024-06-28 | 20.4 MB |
| v3.0.0-beta0 source code.zip | 2024-06-28 | 24.1 MB |
## What's Changed
- [dist]pip requirements-dev.txt by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8258
- add scaling by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8256
- [LLM]Support Gemma model by @Southpika in https://github.com/PaddlePaddle/PaddleNLP/pull/8082
- [BugFix] Try except sequence parallel utils by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8189
- Update CodeCov GitHub Action by @sijunhe in https://github.com/PaddlePaddle/PaddleNLP/pull/8268
- [AutoParallel] Open recompute strategy for llama model by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8265
- Fix sharding < 100 limitation bug by @sneaxiy in https://github.com/PaddlePaddle/PaddleNLP/pull/8146
- use tensor.shape, not paddle.shape(tensor) by @wanghuancoder in https://github.com/PaddlePaddle/PaddleNLP/pull/8260
- [dist CI]update paddlenlp install for CI by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8267
- [Bug Fix]Fix merge parameters in pp by @Southpika in https://github.com/PaddlePaddle/PaddleNLP/pull/8239
- [LLM] add memory stats to logger of trainer by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8269
- Add p2p_comm_overlap for Llama-2-70b benchmark. by @Xreki in https://github.com/PaddlePaddle/PaddleNLP/pull/8276
- add a100 test ground truth by @zhiqiu in https://github.com/PaddlePaddle/PaddleNLP/pull/8249
- [paddle-pipelines] faq semantic search question answering readme by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8292
- [paddle-pipelines] Add pipelines documentation by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8308
- Support llama-3 by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8307
- [Distributed] [CustomDevices] Adapt SP on lora && polish MC2 APIs by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8303
- fix bug for fp16 + delay_scale_loss_scale + sharding_stage1_overlap by @FeixLiu in https://github.com/PaddlePaddle/PaddleNLP/pull/8314
- [paddle-pipelines] Update mkdocs by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8310
- [benchmark]update llama2_ips by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8322
- [dist CI]fix before_hook by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8283
- benchmark llama worker=1 by @wanghuancoder in https://github.com/PaddlePaddle/PaddleNLP/pull/8305
- [AutoParallel] Add llama2 UT for auto-parallel by @heavyrain-lzy in https://github.com/PaddlePaddle/PaddleNLP/pull/8300
- Add system env log for llama test by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8321
- [LLM] Support fuse attention q, k, v weights by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8202
- [Distributed] fix lora by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8325
- fix try import by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8332
- [DEV] Support sync params in tensor parallel config by @From00 in https://github.com/PaddlePaddle/PaddleNLP/pull/8311
- cherry pick paddlenlp 2.8 by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8323
- textfeature_queryinput by @cxa-unique in https://github.com/PaddlePaddle/PaddleNLP/pull/8331
- [BugFix] Fix gpu ci by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8337
- [Trainer] Fix sharding overlap bug by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8333
- [Tokenizer]Add Chat template by @Southpika in https://github.com/PaddlePaddle/PaddleNLP/pull/8226
- [AutoParallel]Refine lr warm_up configuration strategy for llama by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8329
- Add num_hidden_layer config for llama run_pretrain by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8288
- [XPU] llama add xpu support by @dynamicheart in https://github.com/PaddlePaddle/PaddleNLP/pull/8282
- add eliminate_transpose arg by @zhiqiu in https://github.com/PaddlePaddle/PaddleNLP/pull/8339
- change llama/modeling.py to optimize npu performance by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8342
- Update llm docs requirements by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8336
- Disable eval and predict for llama-2 benchmark. by @Xreki in https://github.com/PaddlePaddle/PaddleNLP/pull/8366
- update by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8359
- [LLM] fix lora target modules on llama by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8372
- [paddle-pipelines] Update offline ann by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8353
- refine benchmark bert ips stat by @wanghuancoder in https://github.com/PaddlePaddle/PaddleNLP/pull/8361
- [BugFix] Update truncate in distributed training by @KB-Ding in https://github.com/PaddlePaddle/PaddleNLP/pull/8362
- [dist benchmark]Fix llama2 benchmark by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8376
- Revert "update" by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8389
- Fix test init by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8377
- [Performance] Optimize unified checkpoint save/load speed. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8204
- [npu model bug]fix_global_bug by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8399
- [Bugfix] Fix fast tokenizer import error by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8367
- [bugfix] fix uie by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8379
- fit for llama3 for auto_parallel by @zhiqiu in https://github.com/PaddlePaddle/PaddleNLP/pull/8395
- [DistDataloader] Update implementation, add nested.py by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8380
- [LLM] Fix fuse or split with same key by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8378
- [UC] Fix compatible with npu by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8409
- pre copy pinned data to gpu by @wanghuancoder in https://github.com/PaddlePaddle/PaddleNLP/pull/8386
- Refine position_ids for auto parallel training of llama by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8363
- [Distributed] enable tensor_parallel_output for finetuning by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8370
- fix type promotion problem. by @zxcd in https://github.com/PaddlePaddle/PaddleNLP/pull/8414
- Fix ckpt done by @gongel in https://github.com/PaddlePaddle/PaddleNLP/pull/8402
- [LLM] rename logits_tensor_parallel_output to avoid conflict by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8419
- [Trainer] fix distdataloader by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8420
- fix safe open. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8422
- adapt new type promotion rule for Paddle 2.6 by @zxcd in https://github.com/PaddlePaddle/PaddleNLP/pull/8421
- [BugFix] Fix llama3 `eot_id` by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8371
- add npu-llama-opt0-script by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8401
- [LLM] add assertion for enable_stage1_overlap in lora mode by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8425
- [NPU]Custom fusion operator unification by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8431
- delete csrc/generation/reset_need_stop_value.cc by @yuanlehome in https://github.com/PaddlePaddle/PaddleNLP/pull/8413
- Update llama_npu_opt_lora.sh by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8439
- [CI]add scripts for unittest by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8433
- fix npu sft ckpt load bug and no FA bug by @NINGBENZHE in https://github.com/PaddlePaddle/PaddleNLP/pull/8438
- Fix CI bugs by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8430
- Fix/test gpu by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8452
- Support fused_attention_qkv for auto_parallel llama by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8432
- [BugFix] Fix load rng compatibility. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8450
- update by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8448
- [GCU] Support llama for GCU by @EnflameGCU in https://github.com/PaddlePaddle/PaddleNLP/pull/8445
- [bugfix] fix erniedoc by @w5688414 in https://github.com/PaddlePaddle/PaddleNLP/pull/8393
- [benchmark]Add llama2 auto by @Liujie0926 in https://github.com/PaddlePaddle/PaddleNLP/pull/8424
- Add llama2-70b for test_tipc by @zhangbo9674 in https://github.com/PaddlePaddle/PaddleNLP/pull/8455
- Fix ci tests. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8471
- [NPU] support npu llama2-13B export & inference by @ronny1996 in https://github.com/PaddlePaddle/PaddleNLP/pull/8442
- [LLM] fix bug when loss is None in llama modeling.py by @cqulilujia in https://github.com/PaddlePaddle/PaddleNLP/pull/8459
- fix rotary_emb for llama by @EnflameGCU in https://github.com/PaddlePaddle/PaddleNLP/pull/8470
- [Ops] RoPE kernel support theta input by @yinfan98 in https://github.com/PaddlePaddle/PaddleNLP/pull/8440
- Support Sharding Overlap by @iosmers in https://github.com/PaddlePaddle/PaddleNLP/pull/8473
- Revert "Support Sharding Overlap (#8473)" by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8491
- fix run_benchmark for llama2_70b in auto_parallel by @fightfat in https://github.com/PaddlePaddle/PaddleNLP/pull/8484
- [AutoParallel] Add split_backward for vpp by @heavyrain-lzy in https://github.com/PaddlePaddle/PaddleNLP/pull/8479
- Quick fix from_pretrained. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8486
- Fix rng_state in llm models by @zhangyuqin1998 in https://github.com/PaddlePaddle/PaddleNLP/pull/8396
- [AutoParallel] Support qwen for auto_parallel by @GhostScreaming in https://github.com/PaddlePaddle/PaddleNLP/pull/8312
- modify block_multihead_attention api by @ming1753 in https://github.com/PaddlePaddle/PaddleNLP/pull/8456
- [LLM] disable part of MC2 in lora by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8505
- Update model_utils.py by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8509
- Update merge_lora_params.py by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8514
- [fea] moe support by @bo-ke in https://github.com/PaddlePaddle/PaddleNLP/pull/8498
- Add Sharding V1 broadcast and V2 allgather overlap optimize by @iosmers in https://github.com/PaddlePaddle/PaddleNLP/pull/8499
- [fix] Broadcast optimizer state using broadcast_dp without shard-resh… by @bo-ke in https://github.com/PaddlePaddle/PaddleNLP/pull/8522
- Update README.md by @wawltor in https://github.com/PaddlePaddle/PaddleNLP/pull/8524
- [Safetensors] Fix fast safe open slice. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8512
- Update Benchmark scripts by @iosmers in https://github.com/PaddlePaddle/PaddleNLP/pull/8519
- fix eval. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8529
- [BugFix][NPU] fix llama attn_mask astype error by @tianhaodongbd in https://github.com/PaddlePaddle/PaddleNLP/pull/8528
- fused_ln: Added implementation for the HIP platform by @asr-sheep1 in https://github.com/PaddlePaddle/PaddleNLP/pull/8472
- [CI] Update pip source. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8540
- [PIP] Update run_ci.sh by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8552
- add mteb evaluation by @cxa-unique in https://github.com/PaddlePaddle/PaddleNLP/pull/8538
- [Cherry-pick] Add release grad & sharding format & decorate_exclude_layers by @ForFishes in https://github.com/PaddlePaddle/PaddleNLP/pull/8545
- Add RingFlashAttention for context parallel by @zhangyuqin1998 in https://github.com/PaddlePaddle/PaddleNLP/pull/8383
- fix codecov conflicts by @greycooker in https://github.com/PaddlePaddle/PaddleNLP/pull/8555
- support fused weights for export_model by @ronny1996 in https://github.com/PaddlePaddle/PaddleNLP/pull/8554
- [benchmark] add llama-7b_auto_dp2mp2pp2 benchmark script for cinn by @mmglove in https://github.com/PaddlePaddle/PaddleNLP/pull/8423
- Fix memory leak bug by @sneaxiy in https://github.com/PaddlePaddle/PaddleNLP/pull/8546
- Update sequence_parallel for predict by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8551
- [GPT][CE] Update modeling.py by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8548
- add fuse_attention_ffn support for qwen by @deepllz in https://github.com/PaddlePaddle/PaddleNLP/pull/8526
- Update generation_utils.py by @carryyu in https://github.com/PaddlePaddle/PaddleNLP/pull/8502
- fix llama export by @ronny1996 in https://github.com/PaddlePaddle/PaddleNLP/pull/8561
- Update llama_npu_opt_lora.sh by @Galaxy1458 in https://github.com/PaddlePaddle/PaddleNLP/pull/8562
- [FIX DDP] fix ddp by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8549
- [AutoParallel] Add benchmark for llama-7b-dy2st. by @GhostScreaming in https://github.com/PaddlePaddle/PaddleNLP/pull/8559
- [Cherry pick] Sharding reshard function enhancement by @sneaxiy in https://github.com/PaddlePaddle/PaddleNLP/pull/8544
- [BugFix] Fix test_long_sequence_strategies by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8568
- Fix/ci pip by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8541
- Add async save for optimizer by @ForFishes in https://github.com/PaddlePaddle/PaddleNLP/pull/8557
- add llama & qwen dpo by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8474
- [LLM] support Qwen2 by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8338
- [LLM] Fix Qwen2 by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8584
- fix autotuner benchmark error and fix llama2 dy2st benchmark by @fightfat in https://github.com/PaddlePaddle/PaddleNLP/pull/8587
- fix autotuner resume case by @Difers in https://github.com/PaddlePaddle/PaddleNLP/pull/8259
- Enable test with re-try. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8590
- [xpu] add xpu custom ops support for llama2-7b by @NeroLoh in https://github.com/PaddlePaddle/PaddleNLP/pull/8515
- xpu devices support llama-7b basic mode inference (turn on BlockAtten… by @zhink in https://github.com/PaddlePaddle/PaddleNLP/pull/8588
- Add Pipeline Parallel for PPO training and support generation with InferenceModel by @guoshengCS in https://github.com/PaddlePaddle/PaddleNLP/pull/7953
- [xpu] change xpu setup.py to paddlenlp_ops by @NeroLoh in https://github.com/PaddlePaddle/PaddleNLP/pull/8595
- Clean RLHF main script by @guoshengCS in https://github.com/PaddlePaddle/PaddleNLP/pull/8596
- Fix dataset with empty char. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8469
- XPU open ir pass by @zhink in https://github.com/PaddlePaddle/PaddleNLP/pull/8598
- [bug fix] fix sharding stage1 allgather overlap bug, which requires disabling pinned memory by @iosmers in https://github.com/PaddlePaddle/PaddleNLP/pull/8594
- Add main process print function by @ForFishes in https://github.com/PaddlePaddle/PaddleNLP/pull/8604
- [Feature] Optimize config saving. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8490
- `to_json_string` compatibility upgrade by @sneaxiy in https://github.com/PaddlePaddle/PaddleNLP/pull/8608
- [PaddleNLP 3.0] [Release] Refactor examples by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8609
- finetune support continue_training by @tianhaodongbd in https://github.com/PaddlePaddle/PaddleNLP/pull/8615
- [PaddleNLP 3.0] Refactor/3 part1- remove fast tokenizer. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8613
- Repo adjustment by @wtmlon in https://github.com/PaddlePaddle/PaddleNLP/pull/8605
- [PaddleNLP 3.0] Refactor, merge examples/language_model model_zoo to legacy/model_zoo by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8614
- [PaddleNLP 3.0] Refactor RLHF by @gongel in https://github.com/PaddlePaddle/PaddleNLP/pull/8617
- Remove delay_scale_loss and release_grads for llama-2 13B's benchmark. by @Xreki in https://github.com/PaddlePaddle/PaddleNLP/pull/8623
- [PaddleNLP 3.0] Fix dead link by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8626
- Update PaddleNLP to fix PPO by @sneaxiy in https://github.com/PaddlePaddle/PaddleNLP/pull/8618
- [LLM] support sparse attention for LLAMA by @GuoxiaWang in https://github.com/PaddlePaddle/PaddleNLP/pull/8592
- remove fast generation by @wtmlon in https://github.com/PaddlePaddle/PaddleNLP/pull/8625
- fix npu llama by @zhink in https://github.com/PaddlePaddle/PaddleNLP/pull/8628
- [PaddleNLP 3.0] Refactor/3 part3, move pipelines. by @ZHUI in https://github.com/PaddlePaddle/PaddleNLP/pull/8619
- [PaddleNLP 3.0] update dataset preprocess by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8629
- [LLM] Support prefix tuning and lora for qwen2 by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8601
- modify path of model_zoo in ci_case_auto.sh and ci_case_dy.sh by @jeff41404 in https://github.com/PaddlePaddle/PaddleNLP/pull/8633
- [benchmark] fix model_zoo path by @mmglove in https://github.com/PaddlePaddle/PaddleNLP/pull/8643
- [PaddleNLP 3.0] [LLM] change llm content by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8627
- [LLM] Add sequence_parallel support for qwen by @Difers in https://github.com/PaddlePaddle/PaddleNLP/pull/8558
- [NPU][LLM] add README & reformat llama scripts by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8642
- align llama auto_parallel dataloader with manual_parallel by @zhiqiu in https://github.com/PaddlePaddle/PaddleNLP/pull/8639
- fix fast_ln compile error by @deepllz in https://github.com/PaddlePaddle/PaddleNLP/pull/8650
- Apache License by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8658
- Fix different length for numpy>=1.24.x by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8655
- [LLM][NPU] fix on readme by @SylarTiaNII in https://github.com/PaddlePaddle/PaddleNLP/pull/8659
- [DOC] Fix dead link by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8662
- fix benchmark dir because of PR#8627 by @fightfat in https://github.com/PaddlePaddle/PaddleNLP/pull/8649
- fix llama alibi pretrain by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8668
- inference support llama3(wint8|4/a8w8) by @yuanlehome in https://github.com/PaddlePaddle/PaddleNLP/pull/8630
- [benchmark] fix benchmark script by @mmglove in https://github.com/PaddlePaddle/PaddleNLP/pull/8648
- [cpu] llama avx model inference support by @bukejiyu in https://github.com/PaddlePaddle/PaddleNLP/pull/8634
- [AutoParallel] Change benchmark config for llama2-7b by @heavyrain-lzy in https://github.com/PaddlePaddle/PaddleNLP/pull/8667
- support flashmask by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8670
- [PaddleNLP 3.0] Update README.md by @DrownFish19 in https://github.com/PaddlePaddle/PaddleNLP/pull/8666
- adjust llm readme by @lugimzzz in https://github.com/PaddlePaddle/PaddleNLP/pull/8672
- Update export model by @DesmonDay in https://github.com/PaddlePaddle/PaddleNLP/pull/8671
- Update version by @gongel in https://github.com/PaddlePaddle/PaddleNLP/pull/8675
- Sft flash mask by @wtmlon in https://github.com/PaddlePaddle/PaddleNLP/pull/8664
- Update version by @gongel in https://github.com/PaddlePaddle/PaddleNLP/pull/8676
## New Contributors
- @Southpika made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8082
- @cxa-unique made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8331
- @dynamicheart made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8282
- @EnflameGCU made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8445
- @cqulilujia made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8459
- @yinfan98 made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8440
- @zhangyuqin1998 made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8396
- @ming1753 made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8456
- @asr-sheep1 made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8472
- @NeroLoh made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8515
- @bukejiyu made their first contribution in https://github.com/PaddlePaddle/PaddleNLP/pull/8634
**Full Changelog**: https://github.com/PaddlePaddle/PaddleNLP/compare/v2.8.1...v3.0.0-beta0