- Let's make sure AI is used for the common good!
- Co-Director & Research Lead at Apart Research
- Looking to collaborate on AI research relevant to AI safety and AI alignment
- Current research: safety evaluations of large language models
- Previous research: text summarization, knowledge graph completion
- Community involvement: Vienna AI alignment group, European Network for AI Safety
AI security at Apart Research
Apart Research
- Vienna
- https://apartresearch.com/
- in/jas-ho
- @JasonObermaier
Pinned
- apartresearch/specificityplus: Code for the ACL paper "Detecting Edit Failures in LLMs: An Improved Specificity Benchmark"