I am a Computer Science Ph.D. student at UCLA, co-advised by Nanyun Peng and Kai-Wei Chang. I work on Multimodality (Vision + Language) and Embodied Learning. I'm also a research scientist intern at Meta Superintelligence Labs (Segment Anything Team). Prior to my PhD, I was a visiting researcher at Stanford SVL, working with Jiajun Wu and Fei-Fei Li.
- 🌎 Working from home
- UCLA, Los Angeles
- yu-bryan-zhou.github.io
- Twitter/X: @yu_bryan_zhou
- Bluesky: @yu-bryan-zhou.bsky.social
- LinkedIn: in/yu-bryan-zhou
Pinned
- embodied-agent-interface/embodied-agent-interface: Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral)
- Multimodal-Graph-Script-Learning: Non-Sequential Graph Script Induction via Multimedia Grounding (ACL 2023)

