fix typos #3742

Open
wants to merge 97 commits into
base: main
97 commits
80b3f90
fix typos
RainRat May 4, 2023
e979f11
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 4, 2023
6fbea2e
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 5, 2023
f93b9b8
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 6, 2023
29bc2cf
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 7, 2023
8130017
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 7, 2023
80da028
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat May 7, 2023
8f876ba
Merge branch 'LAION-AI:main' into main
RainRat May 8, 2023
82d330d
Merge branch 'LAION-AI:main' into main
RainRat May 8, 2023
79aaef6
Merge branch 'LAION-AI:main' into main
RainRat May 10, 2023
cb3f63f
Merge branch 'LAION-AI:main' into main
RainRat May 10, 2023
148208d
Merge branch 'LAION-AI:main' into main
RainRat May 11, 2023
abb7e0e
Merge branch 'LAION-AI:main' into main
RainRat May 11, 2023
0231c38
Merge branch 'LAION-AI:main' into main
RainRat May 12, 2023
02999e5
Merge branch 'LAION-AI:main' into main
RainRat May 12, 2023
3cf47d4
Merge branch 'LAION-AI:main' into main
RainRat May 13, 2023
8519b9f
Merge branch 'LAION-AI:main' into main
RainRat May 13, 2023
e65b985
Merge branch 'LAION-AI:main' into main
RainRat May 14, 2023
0126e99
Merge branch 'LAION-AI:main' into main
RainRat May 14, 2023
c1b9ade
Merge branch 'LAION-AI:main' into main
RainRat May 15, 2023
bd98526
Merge branch 'LAION-AI:main' into main
RainRat May 15, 2023
9080ee1
Merge branch 'LAION-AI:main' into main
RainRat May 16, 2023
3baf679
Merge branch 'LAION-AI:main' into main
RainRat May 17, 2023
6f960dc
Merge branch 'LAION-AI:main' into main
RainRat May 19, 2023
63a11db
Merge branch 'LAION-AI:main' into main
RainRat May 21, 2023
95f33ec
Merge branch 'LAION-AI:main' into main
RainRat May 22, 2023
6dfba94
fix typos
RainRat May 23, 2023
223ac1a
try to fix minor formatting issues caused by typo fixes
RainRat May 23, 2023
4aef508
try fix formatting another way
RainRat May 23, 2023
25c4338
one more try at fix formatting
RainRat May 23, 2023
b809ff0
Merge branch 'LAION-AI:main' into main
RainRat May 23, 2023
879f3de
Merge branch 'LAION-AI:main' into main
RainRat May 24, 2023
32d6f80
Merge branch 'LAION-AI:main' into main
RainRat May 25, 2023
91c1d86
Merge branch 'LAION-AI:main' into main
RainRat May 27, 2023
ab7af81
Merge branch 'LAION-AI:main' into main
RainRat May 27, 2023
69d6d13
Merge branch 'LAION-AI:main' into main
RainRat May 29, 2023
818b91d
Merge branch 'LAION-AI:main' into main
RainRat May 29, 2023
e745730
Merge branch 'LAION-AI:main' into main
RainRat May 30, 2023
767c951
Merge branch 'LAION-AI:main' into main
RainRat May 31, 2023
4a71ac2
Merge branch 'LAION-AI:main' into main
RainRat Jun 2, 2023
007b402
Merge branch 'LAION-AI:main' into main
RainRat Jun 2, 2023
6d9bb3a
Merge branch 'LAION-AI:main' into main
RainRat Jun 3, 2023
c141778
Merge branch 'LAION-AI:main' into main
RainRat Jun 5, 2023
aa445df
Merge branch 'LAION-AI:main' into main
RainRat Jun 6, 2023
62baf9b
Merge branch 'LAION-AI:main' into main
RainRat Jun 7, 2023
3cec7c5
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat Jun 7, 2023
51a7173
Merge branch 'main' of https://github.com/RainRat/Open-Assistant
RainRat Jun 8, 2023
9495904
Merge branch 'LAION-AI:main' into main
RainRat Jun 9, 2023
021e58e
Merge branch 'LAION-AI:main' into main
RainRat Jun 10, 2023
19af304
Merge branch 'LAION-AI:main' into main
RainRat Jun 11, 2023
e2c622f
Merge branch 'LAION-AI:main' into main
RainRat Jun 12, 2023
b27c80c
Merge branch 'LAION-AI:main' into main
RainRat Jun 12, 2023
72225a5
Merge branch 'LAION-AI:main' into main
RainRat Jun 12, 2023
504812d
Merge branch 'LAION-AI:main' into main
RainRat Jun 13, 2023
e2eb672
Merge branch 'LAION-AI:main' into main
RainRat Jun 14, 2023
11e65f6
Merge branch 'LAION-AI:main' into main
RainRat Jun 14, 2023
037e6b8
Merge branch 'LAION-AI:main' into main
RainRat Jun 16, 2023
bdce3bf
Merge branch 'LAION-AI:main' into main
RainRat Jun 17, 2023
fd45186
Merge branch 'LAION-AI:main' into main
RainRat Jun 18, 2023
5cb66e8
Merge branch 'LAION-AI:main' into main
RainRat Jun 19, 2023
44d83d4
Merge branch 'LAION-AI:main' into main
RainRat Jun 20, 2023
ff060cb
Merge branch 'LAION-AI:main' into main
RainRat Jun 20, 2023
ddf63a6
Merge branch 'LAION-AI:main' into main
RainRat Jun 23, 2023
ac71c2b
Merge branch 'LAION-AI:main' into main
RainRat Jun 26, 2023
779974e
Merge branch 'LAION-AI:main' into main
RainRat Jun 30, 2023
39876a5
Merge branch 'LAION-AI:main' into main
RainRat Jul 4, 2023
73ce63f
fix typos
RainRat Jul 4, 2023
2a953c6
change whitespace to follow formatting
RainRat Jul 4, 2023
e15bedf
Merge branch 'LAION-AI:main' into main
RainRat Jul 12, 2023
e94feec
Merge branch 'LAION-AI:main' into main
RainRat Jul 16, 2023
2c0a502
Merge branch 'LAION-AI:main' into main
RainRat Jul 19, 2023
a34dd3a
Merge branch 'LAION-AI:main' into main
RainRat Jul 20, 2023
8211688
Merge branch 'LAION-AI:main' into main
RainRat Jul 22, 2023
cf10fbc
Merge branch 'LAION-AI:main' into main
RainRat Jul 24, 2023
36f3b21
Merge branch 'LAION-AI:main' into main
RainRat Jul 25, 2023
94f11df
Merge branch 'LAION-AI:main' into main
RainRat Jul 25, 2023
70994f7
Merge branch 'LAION-AI:main' into main
RainRat Jul 26, 2023
3222c35
Merge branch 'LAION-AI:main' into main
RainRat Jul 29, 2023
7e2a121
Merge branch 'LAION-AI:main' into main
RainRat Jul 31, 2023
d4e7d9a
Merge branch 'LAION-AI:main' into main
RainRat Aug 1, 2023
33185ba
Merge branch 'LAION-AI:main' into main
RainRat Aug 3, 2023
01014be
Merge branch 'LAION-AI:main' into main
RainRat Aug 8, 2023
5af213d
Merge branch 'LAION-AI:main' into main
RainRat Aug 11, 2023
a13b8c7
Merge branch 'LAION-AI:main' into main
RainRat Aug 15, 2023
c4a1991
Merge branch 'LAION-AI:main' into main
RainRat Aug 16, 2023
7700b3b
Merge branch 'LAION-AI:main' into main
RainRat Aug 18, 2023
76ef43c
Merge branch 'LAION-AI:main' into main
RainRat Aug 20, 2023
e1920f8
Merge branch 'LAION-AI:main' into main
RainRat Aug 24, 2023
1db5b4e
Merge branch 'LAION-AI:main' into main
RainRat Sep 1, 2023
8080da7
Merge branch 'LAION-AI:main' into main
RainRat Oct 27, 2023
2a6bd34
Merge branch 'LAION-AI:main' into main
RainRat Nov 14, 2023
dceac8b
Merge branch 'LAION-AI:main' into main
RainRat Nov 26, 2023
bb784e8
fix typos
RainRat Jan 2, 2024
1db0301
Merge branch 'LAION-AI:main' into main
RainRat Jan 6, 2024
eeb9cc3
fix typos
RainRat Feb 29, 2024
c8b61d6
fix typos
RainRat Apr 26, 2024
87f9acf
Update diverse.ipynb
RainRat Apr 30, 2024
4 changes: 2 additions & 2 deletions backend/oasst_backend/tree_manager.py
@@ -1085,7 +1085,7 @@ def _query_need_review(

def query_prompts_need_review(self, lang: str) -> list[Message]:
"""
- Select initial prompt messages with less then required rankings in active message tree
+ Select initial prompt messages with less than required rankings in active message tree
(active == True in message_tree_state)
"""
return self._query_need_review(
@@ -1094,7 +1094,7 @@ def query_prompts_need_review(self, lang: str) -> list[Message]:

def query_replies_need_review(self, lang: str) -> list[Message]:
"""
- Select child messages (parent_id IS NOT NULL) with less then required rankings
+ Select child messages (parent_id IS NOT NULL) with less than required rankings
in active message tree (active == True in message_tree_state)
"""
return self._query_need_review(message_tree_state.State.GROWING, self.cfg.num_reviews_reply, False, lang)
2 changes: 1 addition & 1 deletion data/datasets/TSSB-3M/generate_dataset.py
@@ -117,7 +117,7 @@ def clean(text):


def clean_PII(text):
- # Remove sign-off messege generated by `git commit --signoff`, eg. "Signed-off-by: user_name <xx@yy.zz.com>"
+ # Remove sign-off message generated by `git commit --signoff`, eg. "Signed-off-by: user_name <xx@yy.zz.com>"
signoff_index = text.rfind("\n\nSigned-off-by:")
if signoff_index != -1:
# Remove the sign-off string from the commit message
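The hunk above only fixes a comment; for context, the sign-off stripping that `clean_PII` performs can be sketched stand-alone like this (a minimal version reconstructed from the visible diff lines — the full upstream function may do more):

```python
def clean_pii(text: str) -> str:
    # Remove the sign-off message generated by `git commit --signoff`,
    # e.g. "Signed-off-by: user_name <xx@yy.zz.com>"
    signoff_index = text.rfind("\n\nSigned-off-by:")
    if signoff_index != -1:
        # Drop the sign-off string (and everything after it) from the commit message
        text = text[:signoff_index]
    return text

print(clean_pii("Fix typo\n\nSigned-off-by: user_name <xx@yy.zz.com>"))  # -> Fix typo
```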
@@ -26,7 +26,7 @@
"id": "K9sCPQzIb278"
},
"source": [
- "### DOWLOAD THE DATASET"
+ "### DOWNLOAD THE DATASET"
]
},
{
@@ -156,7 +156,7 @@
"id": "3MxfnNxX2n0m"
},
"source": [
- "### GENERATE THE SUMMARIES AND ANOTATE THE DATASET"
+ "### GENERATE THE SUMMARIES AND ANNOTATE THE DATASET"
]
},
{
2 changes: 1 addition & 1 deletion data/datasets/recipes/tasty_recipes.ipynb
@@ -158,7 +158,7 @@
" for i, instruction in enumerate(ingredient_and_instructions[row[\"slug\"]][\"instructions\"]):\n",
" instructions += f\"\\n{i+1}. {convert_fraction_unicode_chars_to_strings(instruction['display_text'])}\"\n",
"\n",
- " # Constuct the full response\n",
+ " # Construct the full response\n",
" response = f\"\"\"Here's a recipe for {recipe_name}:\n",
"\n",
"Ingredients:\n",
6 changes: 3 additions & 3 deletions data/datasets/safety_directory/child_help/child_help.py
@@ -195,7 +195,7 @@
"Ligne Verte 147 Madagascar": {
"region": "Madagascar",
"page": "https://childhelplineinternational.org/madagascar-ligne-verte-147-madagascar/",
- "description": "Ligne Verte 147 is a child helpline for reporting cases of mistreatment, violence, abuse and exploitation against children and is is free, available 24/7 and accessible everywhere in Madagascar.",
+ "description": "Ligne Verte 147 is a child helpline for reporting cases of mistreatment, violence, abuse and exploitation against children and is free, available 24/7 and accessible everywhere in Madagascar.",
"contacts": {
"Website": {"type": "website", "link": "https://arozaza.mg/"},
"147": {"type": "phone", "link": "tel:147"},
@@ -529,7 +529,7 @@
"Línea Libre": {
"region": "Chile",
"page": "https://childhelplineinternational.org/chile-linea-libre/",
- "description": "Línea Libre is is a psychological support channel aimed at girls, boys and young people, which is attended directly by psychologists trained to contain, guide, intervene in crises, and address mental health concerns or rights violations. It is available Monday to Saturday from 10:00 a.m. to 10:00 p.m. through three channels: phone email, and chat via our app.",
+ "description": "Línea Libre is a psychological support channel aimed at girls, boys and young people, which is attended directly by psychologists trained to contain, guide, intervene in crises, and address mental health concerns or rights violations. It is available Monday to Saturday from 10:00 a.m. to 10:00 p.m. through three channels: phone email, and chat via our app.",
"contacts": {
"Website": {"type": "website", "link": "http://www.linealibre.cl/"},
"1515": {"type": "phone", "link": "tel:1515"},
@@ -2110,7 +2110,7 @@
"Hotline 919": {
"region": "Qatar",
"page": "https://childhelplineinternational.org/qatar-hotline-919/",
- "description": "Hotline 919 provides provides free confidential consultations (social, psychological and legal) for women and children and also provides support to protect and rehabilitate children and women who are victims of violence and family breakdown.",
+ "description": "Hotline 919 provides free confidential consultations (social, psychological and legal) for women and children and also provides support to protect and rehabilitate children and women who are victims of violence and family breakdown.",
"contacts": {
"Website": {"type": "website", "link": "http://www.aman.org.qa/"},
"919": {"type": "phone", "link": "tel:919"},
@@ -1,5 +1,5 @@
/**
- * Developper console script used to generate the associated json file.
+ * Developer console script used to generate the associated json file.
* Wikipedia URL : https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines
* Author : Lucas Oulieu
*/
2 changes: 1 addition & 1 deletion data/datasets/tv_dialogue/README.md
@@ -47,7 +47,7 @@ How's it going?
on Huggingface!

They are examples on Huggingface.
- CUT OUT TO ANOTHER SCENCE
+ CUT OUT TO ANOTHER SCENE

We are somewhere else
[PERSON 1 (v.o)] I wonder where we are?
2 changes: 1 addition & 1 deletion docs/docs/architecture/inference.md
@@ -111,7 +111,7 @@ The inference server is built around [FastAPI](https://fastapi.tiangolo.com/).
for any other currently pending messages in the chat to
`inference.MessageState.cancelled`.
3. After updating the `message` table, we create a RedisQueue for this
- specific message and enque the message.
+ specific message and enqueue the message.
4. Finally, we return an `inference.MessageRead` (a Pydantic model) to the
client. This is the object contains the needed `message_id`.
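The numbered flow this documentation hunk belongs to (cancel other pending messages, create a per-message queue, enqueue, return the object with the `message_id`) can be sketched with an in-memory queue standing in for Redis. All names and shapes below are illustrative stand-ins, not the inference server's actual API:

```python
import itertools
from collections import deque
from dataclasses import dataclass, field

_next_id = itertools.count(1)

@dataclass
class Message:
    text: str
    state: str = "pending"
    message_id: int = field(default_factory=lambda: next(_next_id))

# one queue per message id, standing in for the per-message RedisQueue
queues: dict[int, deque] = {}

def submit_message(chat: list, text: str) -> Message:
    # 1./2. set any other currently pending messages in the chat to cancelled
    for m in chat:
        if m.state == "pending":
            m.state = "cancelled"
    msg = Message(text)
    chat.append(msg)
    # 3. create a queue for this specific message and enqueue the message
    queues[msg.message_id] = deque([msg])
    # 4. return the object that contains the needed message_id
    return msg
```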
2 changes: 1 addition & 1 deletion inference/server/oasst_inference_server/compliance.py
@@ -57,7 +57,7 @@ async def run_compliance_check(websocket: fastapi.WebSocket, worker_id: str, wor
Run a compliance check for the given worker:
- Find a suitable compliance check assistant message
- Task the worker with generating a response with the same context
- - Compare the respons against the existing completed message
+ - Compare the response against the existing completed message
- Update the database with the outcome
"""
async with deps.manual_create_session() as session:
2 changes: 1 addition & 1 deletion model/model_eval/manual/create_synth_import.py
@@ -83,7 +83,7 @@ def main():
reply_texts.add(m.text)

if len(unique_replies) < 2:
- print("Skipping enty with < 2 unique replies")
+ print("Skipping entry with < 2 unique replies")
continue

prompt_message = ExportMessageNode(
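The guard the fixed print statement sits in — skip entries with fewer than two unique replies, since ranking needs something to compare — can be sketched as follows (a hypothetical helper, not the script's real structure):

```python
def keep_rankable_entries(entries):
    """Keep only entries whose replies contain at least 2 unique texts."""
    kept = []
    for replies in entries:
        unique_replies = set(replies)
        if len(unique_replies) < 2:
            # mirrors the diff: entries without comparable replies are skipped
            print("Skipping entry with < 2 unique replies")
            continue
        kept.append(replies)
    return kept
```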
2 changes: 1 addition & 1 deletion model/model_training/custom_datasets/formatting.py
@@ -74,7 +74,7 @@ def system_tag(

shuffle(properties)

- # ensure that potentially multi-line conext field comes last
+ # ensure that potentially multi-line context field comes last
if self.context:
properties.append(("context", self.context))

2 changes: 1 addition & 1 deletion model/model_training/models/__init__.py
@@ -2,7 +2,7 @@


def freeze_top_n_layers(model, target_layers):
- # its possible we can simply detect which module is a ModuleList
+ # it's possible we can simply detect which module is a ModuleList
# and simply freeze the module without doing string parsing
for name, param in model.named_parameters():
if "embed" in name:
2 changes: 1 addition & 1 deletion model/model_training/models/patching_falcon.py
@@ -19,7 +19,7 @@ def falcon_forward_with_flash_attn(
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
"""
head_mask, alibi & output_attention are not supported.
- Reference to the original `FalconAttention.forwad()` method which this patch replaces:
+ Reference to the original `FalconAttention.forward()` method which this patch replaces:
https://github.com/huggingface/transformers/blob/c965d302791cf935d6ea7776428749be678cf509/src/transformers/models/falcon/modeling_falcon.py#L281
"""

2 changes: 1 addition & 1 deletion model/pretokenizer/README.md
@@ -19,7 +19,7 @@ python -m pip install ../../oasst-data/

### Configuration

- The datamix to proces can be configured with one or multiple sections in the
+ The datamix to process can be configured with one or multiple sections in the
`configs/pretokenize.yaml` file.

### Example usage
2 changes: 1 addition & 1 deletion model/pretokenizer/tokenizer.py
@@ -310,7 +310,7 @@ def bos(self):
def eod(self):
if self._eod_id is not None:
return self._eod_id
- return self._eos_id  # in case noe eod we can patch this up with an eos
+ return self._eos_id  # in case no eod we can patch this up with an eos

@property
def eos_token_id(self):
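The property in this hunk implements a simple fallback: prefer a dedicated end-of-document id, otherwise reuse the end-of-sequence id. A minimal stand-alone sketch (the class name and constructor here are assumptions, not the tokenizer's real interface):

```python
class SpecialIds:
    def __init__(self, eos_id, eod_id=None):
        self._eos_id = eos_id
        self._eod_id = eod_id

    @property
    def eod(self):
        if self._eod_id is not None:
            return self._eod_id
        return self._eos_id  # in case no eod we can patch this up with an eos
```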
4 changes: 2 additions & 2 deletions notebooks/TSSB-3M-bugs-dataset/TSSB-3M-bugs_dataset.ipynb
@@ -707,10 +707,10 @@
"\n",
"g = Github()\n",
"\n",
- "# TO DO, find a way to get a commmit from SHA\n",
+ "# TO DO, find a way to get a commit from SHA\n",
"# 1. Use GitHub API\n",
"# 2. Download repos with their history\n",
- "# 3. Web scaping"
+ "# 3. Web scraping"
]
},
{
@@ -201,7 +201,7 @@
},
"outputs": [],
"source": [
- "# Make grammar erros (more like: change random words into words of similar meaning)\n",
+ "# Make grammar errors (more like: change random words into words of similar meaning)\n",
"import nltk\n",
"from nltk.corpus import wordnet\n",
"import random\n",
6 changes: 3 additions & 3 deletions notebooks/data-augmentation/unified-qa/unified-qa.ipynb
@@ -1004,7 +1004,7 @@
"metadata": {},
"outputs": [],
"source": [
- "random.seed(20) # for reproduciablity"
+ "random.seed(20) # for reproducibility"
]
},
{
@@ -1025,7 +1025,7 @@
"def convert_unified_qa(dataset_url):\n",
" # download using pandas\n",
" ds = pd.read_csv(dataset_url, on_bad_lines=\"skip\", names=[\"Question\", \"Answer\"], sep=\"\\t\")\n",
- " # get name for metatdata\n",
+ " # get name for metadata\n",
" ds_name = dataset_url.split(\"/unifiedqa/data/\")[1].split(\"/\")[0]\n",
" # get conversation templates list\n",
" conv_funcs = converter_functions[ds_name]\n",
@@ -1038,7 +1038,7 @@
" answer = item.Answer\n",
" if question == np.nan or answer == np.nan:\n",
" print(\"Skipped\")\n",
- " # get a random conversation generatore function\n",
+ " # get a random conversation generator function\n",
" conv_func = random.choice(conv_funcs)\n",
" try:\n",
" conv_list = conv_func(question, answer)\n",
14 changes: 7 additions & 7 deletions notebooks/data-augmentation/wikidata-qa/wikidata.ipynb
@@ -810,7 +810,7 @@
" \"{sub} is used mostly for {a}.\",\n",
" \"{name} is mostly known for {a}.\",\n",
" ],\n",
- " \"P487\": [\"{a}\", \"The {name} emoji is {a}.\", \"The {a} character repesents {name}.\"],\n",
+ " \"P487\": [\"{a}\", \"The {name} emoji is {a}.\", \"The {a} character represents {name}.\"],\n",
" \"P509\": [\"{name} died of {a}.\", \"The cause of {pos} death was {a}.\"],\n",
" \"P527\": [\"{name} are made of {a}.\", \"They are made of {a}.\"],\n",
" \"P569\": [\"{name} was born on {a}.\", \"{pos} birthday is on the {a}.\"],\n",
@@ -828,12 +828,12 @@
" ],\n",
" \"P580\": [\"{name} started in {a}.\", \"{name} first started at {a}.\"],\n",
" \"P582\": [\"{name} ended in {a}.\", \"{name} lasted until {a}.\"],\n",
- " \"P625\": [\"{name} is lcoated at {a}.\", \"The coordinates for {name} are {a}.\", \"{pos} GPS location is {a}.\"],\n",
+ " \"P625\": [\"{name} is located at {a}.\", \"The coordinates for {name} are {a}.\", \"{pos} GPS location is {a}.\"],\n",
" \"P837\": [\"{name} is celebrated on {a}.\", \"{name} is on {a}.\"],\n",
" \"P856\": [\n",
" \"The URL for {name} is: {a}\",\n",
" \"See {a}\",\n",
- " \"The URL of {pos} webiste is {a}\",\n",
+ " \"The URL of {pos} website is {a}\",\n",
" \"{pos} web address is: {a}\",\n",
" ],\n",
" \"P973\": [\n",
@@ -855,7 +855,7 @@
" \"P2043\": [\"{name} is {a} long.\", \"{sub} has a length of {a}.\"],\n",
" \"P2044\": [\"{name} is {a} tall.\", \"{name} is {a} above sea level.\", \"{pos} elevation is {a}.\"],\n",
" \"P2046\": [\"{name}'s area is {a}\", \"{pos} area is {a}.\"],\n",
- " \"P2049\": [\"{name}'s widht is {a}.\", \"{name} is {a} wide.\"],\n",
+ " \"P2049\": [\"{name}'s width is {a}.\", \"{name} is {a} wide.\"],\n",
" \"P2250\": [\"{name} have a life expectancy of {a}.\", \"{pos} life expectancy is about {a}.\"],\n",
" \"P2283\": [\n",
" \"{name} uses {a} to work.\",\n",
@@ -887,20 +887,20 @@
" \"{pos} {l} children are {a}.\",\n",
" ],\n",
" \"P50\": [\"{name} was co-written by {a}.\", \"The authors of {name} are {a}.\"],\n",
- " \"P57\": [\"{name} was direcrted by the following people: {a}.\", \"{a} were the directors of {name}.\"],\n",
+ " \"P57\": [\"{name} was directed by the following people: {a}.\", \"{a} were the directors of {name}.\"],\n",
" \"P61\": [\"{pos} inventors are {a}.\", \"{name} was discovered by {a}.\"],\n",
" \"P106\": [\"{name} has multiple occupations: {a}.\", \"{name}'s job titles are: {a}.\"],\n",
" \"P169\": [\"{name} is the CEO of multiple companies, such as {a}.\", \"{sub} is the CEO at {a}.\"],\n",
" \"P225\": [\"The taxon names for {name} are {a}.\", \"The proper scientific terms for {name} are {a}.\"],\n",
" \"P246\": [\"The elements of {name} are {a}.\", \"The symbols for {name} are {a}.\"],\n",
" \"P274\": [\"The formulas for {name} are {a}.\", \"The chemical formulas of the compound {name} are {a}.\"],\n",
- " \"P487\": [\"The {name} emojis are {a}.\", \"The characters {a} repesent {name}.\"],\n",
+ " \"P487\": [\"The {name} emojis are {a}.\", \"The characters {a} represent {name}.\"],\n",
" \"P527\": [\"The ingredients of {name} are {a}.\", \"{a} are all parts needed for {name}.\"],\n",
" \"P575\": [\n",
" \"Sources disagree on the exact date, it is said that {name} was invented in {a}.\",\n",
" \"{name} was discovered multiple times at {a}.\",\n",
" ],\n",
- " \"P856\": [\"The URLs for {name} are: {a}\", \"See {a}\", \"The URLs of {pos} webiste are {a}\"],\n",
+ " \"P856\": [\"The URLs for {name} are: {a}\", \"See {a}\", \"The URLs of {pos} website are {a}\"],\n",
" \"P625\": [\n",
" \"{name} can be found under the following GPS locations: {a}.\",\n",
" \"The coordinates for {name} are {a}.\",\n",
8 changes: 4 additions & 4 deletions notebooks/detoxify-evaluation/detoxify-evaluation.ipynb
@@ -327,11 +327,11 @@
"\n",
"| Model name | Not obviously toxic| Not obviously non-toxic | Obviously toxic| Obviously non-toxic|\n",
"| :---: | :---: | :---: |:---: | :---: |\n",
- "|original| failed at all, easily accepted racist, sexist overally toxic prompts that were well formulated |Very sensitive on swear words, failed to reckognize context| good performance|good performance|\n",
- "|unbiased|Managed to find some hidden toxicity but not on all sentences| Very sensitive explicit language but shown ability to recognize context| Did well but failed to reckognize some gender stereotype mockery | good performance
- "|multilingual|Managed to find some hidden toxicity but not on all sentences| Very sensitive explicit language but shown ability to recognize context| Did well but failed to reckognize some gender stereotype mockery | good performance
+ "|original| failed at all, easily accepted racist, sexist overally toxic prompts that were well formulated |Very sensitive on swear words, failed to recognize context| good performance|good performance|\n",
+ "|unbiased|Managed to find some hidden toxicity but not on all sentences| Very sensitive explicit language but shown ability to recognize context| Did well but failed to recognize some gender stereotype mockery | good performance
+ "|multilingual|Managed to find some hidden toxicity but not on all sentences| Very sensitive explicit language but shown ability to recognize context| Did well but failed to recognize some gender stereotype mockery | good performance
"\n",
- "Subjectivly 'unbiased' looks like the best performing model. \n",
+ "Subjectively 'unbiased' looks like the best performing model. \n",
"\n",
"I don't think it would do well as a security layer in a live version of open assistant unless we do some finetuning first, because it can be fooled to pass toxicity if it's presented in formal language. \n",
"\n",
2 changes: 1 addition & 1 deletion notebooks/diverse/diverse.ipynb
@@ -164,7 +164,7 @@
" answers = re.findall(r\"Answer:?(.*?)#\", item.replace(\"\\n\", \" \"))\n",
" questions = re.findall(r\"Question:?(.*?) Answer:\", item.replace(\"\\n\", \" \"))\n",
"\n",
- " # The last question does not contain an aswer so we drop it every time.\n",
+ " # The last question does not contain an answer so we drop it every time.\n",
" if len(answers) < len(questions):\n",
" questions.pop(-1)\n",
"\n",
2 changes: 1 addition & 1 deletion website/src/components/Chat/ChatConversationTree.tsx
@@ -111,7 +111,7 @@ const TreeChildren = ({
{...props}
canRetry={isLeaf}
showEncourageMessage={props.showEncourageMessage && isLeaf}
- // TODO refacor away from this dirty hack
+ // TODO refactor away from this dirty hack
id={isLeaf && currentTree.role === "assistant" ? LAST_ASSISTANT_MESSAGE_ID : undefined}
data-id={currentTree.id}
pagingSlot={
2 changes: 1 addition & 1 deletion website/src/components/Messages/LabelInputGroup.tsx
@@ -32,7 +32,7 @@ interface LabelInputGroupProps {
*
* Note that Label is a type that include a name, like "spam" or "fails_task", and a widget value,
* like "yes_no".
- * The LabelYesNoGroup will then look for spam.question or fails_task.qustion strings in the translation files.
+ * The LabelYesNoGroup will then look for spam.question or fails_task.question strings in the translation files.
*
*/
export const LabelInputGroup = ({
2 changes: 1 addition & 1 deletion website/src/components/Tasks/Task/Task.tsx
@@ -99,7 +99,7 @@ export const Task = () => {
case "DEFAULT_WARN":
return { mode: "EDIT", replyValidity: "DEFAULT" };
case "SUBMITTED":
- // allow return to edit from subbmitted mode (error happen during submitting task)
+ // allow return to edit from submitted mode (error happen during submitting task)
return { mode: "EDIT", replyValidity: "VALID" };
default:
return status;
2 changes: 1 addition & 1 deletion website/src/lib/oasst_api_client.ts
@@ -230,7 +230,7 @@ export class OasstApiClient {
}

/**
- * Modify a message's content and save it's previous content as a revision
+ * Modify a message's content and save its previous content as a revision
*/
async edit_message(message_id: string, user: BackendUserCore, new_content: string) {
return this.post<void>(`/api/v1/messages/${message_id}/edit`, {
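The edit-with-revision behavior described by the fixed docstring can be sketched with in-memory dicts standing in for the API's storage. This is purely illustrative — the real client just POSTs to `/api/v1/messages/{message_id}/edit` and the backend handles revisions:

```python
def edit_message(store: dict, revisions: dict, message_id: str, new_content: str) -> None:
    # archive the message's previous content as a revision...
    revisions.setdefault(message_id, []).append(store[message_id])
    # ...then overwrite it with the new content
    store[message_id] = new_content
```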