In the Abstract of this paper, it says "with no parameter updates or task-specific templates".
I thought this project was a new method for "Prompt Tuning" via meta-learning, aiming to produce a better prompt/instruction than regular In-Context Learning.
But in the code, in "model.do_train()", the model's parameters are updated via backpropagation (loss.backward()). Is this still a form of fine-tuning (FT)?
If I switch my base LM to something huge like GPT-3 (175B), the cost becomes prohibitive.
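To make the distinction I'm asking about concrete, here is a minimal toy sketch (not the repo's actual code; the function names and the 1-parameter model are mine) of the difference between fine-tuning, where `loss.backward()` plus an optimizer step changes the weights, and in-context learning, where only the input changes and the weights stay frozen:

```python
# Toy 1-parameter "model": y = w * x. Hypothetical illustration only.

def loss(w, x, y):
    # Squared error between prediction w*x and target y.
    return (w * x - y) ** 2

def grad(w, x, y):
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x
    return 2 * (w * x - y) * x

def fine_tune_step(w, x, y, lr=0.1):
    # Analogue of loss.backward() + optimizer.step():
    # the parameter w itself is updated.
    return w - lr * grad(w, x, y)

def in_context_predict(w, x):
    # Analogue of in-context learning: w is never touched;
    # in a real LM only the prompt (input) would change.
    return w * x

w0 = 0.5
w1 = fine_tune_step(w0, x=1.0, y=1.0)
print(w0, "->", w1)  # the weight moved, so this is fine-tuning
```

If `do_train()` ends up calling something like `fine_tune_step` on the base LM's weights, that is fine-tuning regardless of what the prompt looks like, which is why the cost scales with model size.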