Fine-tuning now works for both ibm-granite/granite-3b-code-instruct and ibm-granite/granite-8b-code-base, as far as I have checked with the Llama 3 Colab notebook: training loss decreases as expected. However, inference output is still useless for both models. Example:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the fibonnaci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
1#<fim_prefix>A
# str
growth
for
for
for
for
`
`
`
` ` ` ` ` ` ` ` ` ` 9\ `<fim_prefix><fim_prefix><fim_prefix><fim_prefix>
```
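For reference, this is roughly the inference path, following the Llama 3 notebook's pattern (a minimal sketch with the Granite model name substituted in; the generation parameters here are assumptions, not my exact settings):

```python
from unsloth import FastLanguageModel

# Load the Granite model the same way the Llama 3 notebook loads Llama 3.
# (Assumption: 4-bit loading and a 2048-token context.)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ibm-granite/granite-3b-code-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Standard Alpaca prompt template from the notebook
# (the "fibonnaci" typo is verbatim from the notebook's example prompt).
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode
inputs = tokenizer(
    [alpaca_prompt.format("Continue the fibonnaci sequence.", "1, 1, 2, 3, 5, 8", "")],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs))
```

Running this after fine-tuning is what produces the garbage shown above (note the stray `<fim_prefix>` FIM tokens in the output).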
These open-source models were released just yesterday at Red Hat Summit:
https://huggingface.co/ibm-granite
https://arxiv.org/abs/2405.04324
If this ends up being a bigger ask than I think it is, and there's something I can do to help make it happen, let me know.