- Training the Tokenizer - 03-06-2025
- Self-Attention in Transformers - 21-06-2025
- Masked Self-Attention - 25-06-2025
- Temperature in LLM - 10-07-2025
- KV Caching in Transformers - 26-07-2025
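The "Temperature in LLM" post above touches on sampling; as a minimal sketch (not code from the post), temperature divides the logits before the softmax, so values below 1 sharpen the distribution and values above 1 flatten it toward uniform:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max so exp() never overflows
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, temperature=0.5)  # more peaked
flat = softmax_with_temperature(logits, temperature=2.0)   # closer to uniform
```

With T=0.5 the top token's probability rises well above its T=1.0 value, while T=2.0 pulls all probabilities closer together.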
- Commented in [crewAIInc/crewAI] on 2025-11-28.
AI Summary: @Vidit-Ostwal noted that the issue appears to have already been addressed, referencing previous issue #3986, and that the change should resolve the reported concerns.
- Commented in [crewAIInc/crewAI] on 2025-11-28.
AI Summary: @Vidit-Ostwal said there is no issue with the current timeline and no urgency about implementing the changes, acknowledging the holiday period and their colleague's schedule; progress is expected to resume after the break.
- Commented in [crewAIInc/crewAI] on 2025-11-27.
AI Summary: @Vidit-Ostwal suggested verifying that calls to gpt-oss work independently of crewAI, and asked whether the setup is self-hosted or cloud-hosted, to help narrow down whether the problem lies in the hosting environment or in the API calls themselves.
- Commented in [crewAIInc/crewAI] on 2025-11-27.
AI Summary: @Vidit-Ostwal pointed out that a parameter defining the LLM object is missing from the current implementation and should be added, and asked whether a performance drop has been observed with the qwen3 model.
- Commented in [crewAIInc/crewAI] on 2025-11-26.
AI Summary: @Vidit-Ostwal suggested installing the package directly from a GitHub branch that addresses invalid responses from the language model. The branch contains a small change to the function that checks whether a response is null because the context length has been exceeded.
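The comment does not include the actual repository URL or branch name, so the names below are placeholders; the general shape of such an install follows pip's standard syntax for installing from a Git branch:

```shell
# Placeholder <org>/<repo>/<branch> — substitute the ones from the comment thread.
pip install "git+https://github.com/<org>/<repo>.git@<branch>"
```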
- Raised an issue in [crewAIInc/crewAI]: [BUG] Task can't be configured when passing None as a context parameter via YAML (2025-11-16).
AI Summary: @Vidit-Ostwal reported that configuring a task fails when its context parameter is passed as None via YAML; the expected behavior is for configuration to complete without error. The bug was reproduced on macOS Sonoma with Python 3.11 and the latest versions of crewAI and its tools. Reproduction evidence is included, and the issue may duplicate #3695 and #3697.
- Raised an issue in [crewAIInc/crewAI]: [BUG] LLMStreamChunkEvent has no differentiator (2025-11-06).
AI Summary: @Vidit-Ostwal reported that LLMStreamChunkEvent has no message_id property, which makes it hard to associate stream chunks belonging to the same message when streaming through FastAPI. task_id and agent_id are used as a workaround, but multiple responses from the same agent then merge into a single stream. Guidance on implementing a fix was requested.
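The merging problem can be illustrated with a small, entirely hypothetical sketch (none of these field names come from crewAI's actual API): keying chunks only by (task_id, agent_id) fuses two consecutive responses from the same agent, while an extra message_id keeps them apart:

```python
from collections import defaultdict

# Hypothetical chunk records; the real LLMStreamChunkEvent fields may differ.
chunks = [
    {"task_id": "t1", "agent_id": "a1", "message_id": "m1", "text": "Hello"},
    {"task_id": "t1", "agent_id": "a1", "message_id": "m1", "text": " world"},
    {"task_id": "t1", "agent_id": "a1", "message_id": "m2", "text": "Bye"},
]

by_agent = defaultdict(str)
by_message = defaultdict(str)
for c in chunks:
    # Without a differentiator, m1 and m2 collapse into one stream.
    by_agent[(c["task_id"], c["agent_id"])] += c["text"]
    # With message_id in the key, each response accumulates separately.
    by_message[(c["task_id"], c["agent_id"], c["message_id"])] += c["text"]
```

Here `by_agent` ends up with a single merged stream, while `by_message` correctly separates the two responses.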
No pull requests opened recently.
No repositories starred recently.
No repositories forked recently.

