Hello! I'm Ahmed Elgarhy, a seasoned machine learning engineer and researcher passionate about pushing the boundaries of AI. My journey in the field revolves around groundbreaking research, creating impactful Python packages, and sharing knowledge with the community.
My journey started in 2004 as a web developer. Since then, my expertise has grown to span machine learning, deep learning, web development, and mobile development, using a diverse set of languages and frameworks.
My research sits at the forefront of artificial intelligence and is driven by a pursuit of innovation in machine learning. Below are the key areas where my work has made substantial contributions, each aimed at advancing what AI systems can do.
This research introduces a novel architectural framework for large language models (LLMs) designed to improve efficiency, reduce bias, and support task-specific adaptability. The framework comprises two integral components, the Autopilot System and the Mental Process, supported by a dynamic self-talking mechanism. By leveraging transfer learning, the design aims to produce language models that are not only data-efficient but also adaptable and nuanced.
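The two-pathway design can be pictured, very loosely, as a router choosing between a fast reflexive pass and a slower deliberative one. The sketch below is purely illustrative: the function names, the routing rule (input magnitude as an uncertainty proxy), and the pathway internals are all assumptions for illustration, not the actual framework.

```python
import math

def autopilot(x):
    """Fast pattern-matching pathway: a single fixed linear pass (hypothetical)."""
    w = [0.5, -0.25, 0.1, 0.3]
    return [xi * wi for xi, wi in zip(x, w)]

def mental_process(x, steps=3):
    """Slower deliberative pathway: iterative nonlinear refinement (hypothetical)."""
    for _ in range(steps):
        x = [math.tanh(xi) for xi in x]
    return x

def route(x, threshold=1.0):
    # Hypothetical dispatch rule: small-magnitude inputs take the fast path;
    # the real criterion used in the research is not specified here.
    magnitude = sum(abs(xi) for xi in x)
    return autopilot(x) if magnitude < threshold else mental_process(x)

print(route([0.1, 0.1, 0.1, 0.1]))  # small input -> autopilot pathway
print(route([1.0, 1.0, 1.0, 1.0]))  # large input -> mental process
```

The point of the sketch is only the shape of the idea: one cheap pathway, one expensive one, and a learned or heuristic gate deciding between them per input.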
In this research, titled "Layered Filtering Approach for Discovering Optimal Transformers Architectures," we propose a novel strategy for effectively searching for optimal transformer architectures. The approach involves systematically filtering the search space through a series of layers, each refining the set of architectures to identify the most promising candidates. By progressively narrowing down the options, our goal is to efficiently discover architectures that exhibit superior performance.
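The layered filtering idea can be sketched as a pipeline of increasingly expensive stages, each shrinking the candidate pool. The stages below (a parameter-budget check, a cheap proxy ranking, and a short-training pass) are assumed for illustration, and the proxy scores are random stand-ins, not the paper's actual metrics.

```python
import random

random.seed(42)

# Candidate transformer configurations: (layers, heads, hidden dim).
candidates = [
    {"layers": l, "heads": h, "dim": d}
    for l in (2, 4, 6, 12)
    for h in (4, 8, 16)
    for d in (256, 512, 1024)
]

def param_budget_filter(cands, max_params=50e6):
    # Layer 1: cheap static check using a rough parameter-count estimate.
    def est(c):
        return c["layers"] * 12 * c["dim"] ** 2
    return [c for c in cands if est(c) <= max_params]

def proxy_score_filter(cands, keep=10):
    # Layer 2: rank by a cheap proxy score (random stand-in here for
    # e.g. a zero-cost metric) and keep the top candidates.
    return sorted(cands, key=lambda c: random.random(), reverse=True)[:keep]

def short_train_filter(cands, keep=3):
    # Layer 3: briefly "train" each survivor (simulated) and keep the best few.
    return sorted(cands, key=lambda c: random.random(), reverse=True)[:keep]

pool = candidates
for stage in (param_budget_filter, proxy_score_filter, short_train_filter):
    pool = stage(pool)
    print(stage.__name__, "->", len(pool), "candidates")
```

Because each stage only sees survivors of the previous one, the expensive evaluations run on a small fraction of the original search space, which is the efficiency argument behind progressive narrowing.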
I've developed the AdvancedAccelerator (https://github.com/elgrhy/advancedneural) Python package, a tool tailored to enhance the efficiency and performance of transformer-based models, applying techniques including:
- Dropout: Mitigating overfitting and improving generalization.
- Residual Connections: Facilitating the flow of information through deep networks.
- Layer-wise Normalization: Improving training stability and convergence.
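These three techniques compose naturally into a single residual sub-block. The NumPy sketch below is a simplified illustration of that composition (the linear map stands in for an attention or feed-forward layer; it is not the package's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Layer-wise normalization: zero mean, unit variance along features.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def dropout(x, p=0.1, training=True):
    # Inverted dropout: randomly zero activations, rescale the survivors
    # so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def sublayer(x, W):
    # One residual sub-block: LayerNorm -> linear -> dropout -> residual add.
    return x + dropout(layer_norm(x) @ W)

x = rng.standard_normal((2, 8))       # (tokens, features)
W = rng.standard_normal((8, 8)) * 0.1
out = sublayer(x, W)
print(out.shape)  # (2, 8)
```

The residual add keeps a direct path for gradients through deep stacks, dropout regularizes during training only, and the pre-normalization stabilizes the scale of what the linear layer sees.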
**Machine Learning**

TensorFlow | PyTorch | Scikit-learn | Pandas |
---|---|---|---|
Keras | Prophet | XGBoost | LightGBM |

**Frontend**

React.js | Next.js | Tailwind CSS |
---|---|---|
Material UI | HTML5 | CSS3 |

**Backend**

Node.js | Express.js | MongoDB |
---|---|---|
Django | Flask | SQL |

**Mobile**

Flutter | SwiftUI |
---|---|
Dart | Kotlin |

**Languages**

Python | JavaScript | TypeScript |
---|---|---|
Swift | Dart | Kotlin |
If you're interested in discussing machine learning, AI, or anything related to technology, I'd love to connect! Feel free to reach out through [email protected]
Let's explore the frontiers of AI together!