Artificial intelligence is moving beyond domain-specific tasks toward systems that integrate perception, reasoning, and action across modalities. In this talk, I will present recent work on hybrid AI frameworks that combine graph neural networks, knowledge graphs, and large language models to strengthen reasoning and interpretability. Building on these foundations, I will discuss advances in multimodal fusion and embodied intelligence, with case studies in robotics and manufacturing, including decision-making for reconfigurable systems and runtime adaptability. Together, these results demonstrate how combining symbolic structure with neural flexibility enables more autonomous and resilient AI for complex industrial environments.
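
To make the general shape of such a hybrid pipeline concrete, the sketch below is a minimal, hypothetical illustration (not the speaker's actual system): it stores a toy manufacturing knowledge graph as triples, refines entity embeddings with one round of GNN-style message passing, and serializes the most relevant facts into a prompt that could be handed to a large language model. All names (`triples`, `neighbor_average`, `build_prompt`) and the graph content are invented for this example, and the LLM call itself is deliberately omitted.

```python
import numpy as np

# Toy knowledge graph as (head, relation, tail) triples -- hypothetical content.
triples = [
    ("robot_arm", "part_of", "cell_A"),
    ("cell_A", "produces", "gearbox"),
    ("cell_A", "reconfigurable_to", "cell_B"),
    ("cell_B", "produces", "housing"),
]

entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
index = {e: i for i, e in enumerate(entities)}

# Random initial embeddings stand in for features from a trained GNN encoder.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(entities), 8))

def neighbor_average(emb, triples, index):
    """One GNN-style message-passing step: each entity's new embedding
    is the mean of its own embedding and those of its graph neighbors."""
    out = emb.copy()
    counts = np.ones(len(emb))
    for h, _, t in triples:
        hi, ti = index[h], index[t]
        out[hi] += emb[ti]; counts[hi] += 1
        out[ti] += emb[hi]; counts[ti] += 1
    return out / counts[:, None]

def build_prompt(query_entity, triples, emb, index, k=3):
    """Rank facts by embedding similarity to the query entity and
    serialize the top-k into a natural-language prompt for an LLM."""
    q = emb[index[query_entity]]
    scored = sorted(
        triples,
        key=lambda tr: -float(emb[index[tr[0]]] @ q + emb[index[tr[2]]] @ q),
    )
    facts = "; ".join(f"{h} {r} {t}" for h, r, t in scored[:k])
    return f"Given the facts: {facts}. What should {query_entity} do next?"

emb = neighbor_average(emb, triples, index)
print(build_prompt("cell_A", triples, emb, index))
# In a full system, this prompt would now be sent to a language model.
```

The point of the sketch is only the flow it demonstrates: symbolic triples provide structure, neural message passing provides flexible relevance scoring, and the language model consumes the grounded context, which is one simple reading of the symbolic-plus-neural combination described above.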