media/diagram_federated_learning_orchestration.png −21.4 KiB (45.1 KiB)

media/diagram_federated_learning_orchestration.puml 0 → 100644 +38 −0

@startuml
hide footbox
participant Orchestrator as "FL Orchestrator (MEC/IN-CSE)"
skinparam participant {
    BackgroundColor #ADD8E6
}
participant Registry as "Federation Registry"
participant Node as "MEC/oneM2M Nodes (FL Clients)"
participant Aggregator as "MEC Host/IN-CSE (FL Aggregator)"

Orchestrator -> Registry : Query available nodes & capabilities
Registry --> Orchestrator : Return eligible nodes
Orchestrator -> Node : Select participants\n(Federation Group Selection)
Orchestrator -> Node : Distribute initial global model

loop Local Training
    Node -> Node : Train local model on private data
    Node -> Aggregator : Send model updates (weights/gradients)
end

Aggregator -> Aggregator : Aggregate model updates securely
Aggregator -> Orchestrator : Send aggregated global model
Orchestrator -> Node : Redistribute improved model

alt Node with limited resources
    Orchestrator -> Node : Assign smaller training load
end

alt High connectivity nodes
    Orchestrator -> Node : Prioritize for synchronization
end
@enduml

media/diagram_swarm_computing_orchestration.png −28.1 KiB (68 KiB)
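The loop above is a classic federated-averaging round: clients train on private data, send weight updates to the aggregator, and the aggregator combines them into a new global model. As a minimal sketch of the "Aggregate model updates securely" step, assuming each client reports a flat weight vector plus its local sample count (the `fed_avg` function and the sample data are illustrative, not part of the MEC/oneM2M APIs):

```python
def fed_avg(updates):
    """FedAvg-style aggregation: average each weight coordinate across
    clients, weighting every client by its local sample count.
    Hypothetical sketch; the diagram does not specify the aggregation
    scheme used by the MEC Host/IN-CSE aggregator."""
    total_samples = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [
        sum(weights[i] * n for weights, n in updates) / total_samples
        for i in range(dim)
    ]

# Three FL clients report (model weights, local sample count).
client_updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30), ([5.0, 6.0], 60)]
global_model = fed_avg(client_updates)  # → [4.0, 5.0]
```

The sample-count weighting is why the orchestrator's "Assign smaller training load" branch still works: a resource-limited node simply contributes proportionally less to the aggregate.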
media/diagram_swarm_computing_orchestration.puml 0 → 100644 +49 −0

@startuml
hide footbox
skinparam participant {
    BackgroundColor white
}
participant Orchestrator
skinparam participant {
    BackgroundColor #ADD8E6
}
participant "MEC Host/MN-CSE (Swarm Node A)" as NodeA
participant "MEC Host/MN-CSE (Swarm Node B)" as NodeB
participant "MEC Host/MN-CSE (Swarm Node C)" as NodeC
participant "MEC Node" as MEC
participant "MN-CSE/MEC Host/IN-CSE (Collector Node)" as Collector
participant "MEC/oneM2M APIs" as Bus

== Task Decomposition ==
Orchestrator -> Orchestrator : Divide Global Task
Orchestrator -> NodeA : Assign Subtask A
Orchestrator -> NodeB : Assign Subtask B

== Synchronization ==
NodeA -> Bus : Publish State Update
Bus -> NodeB : Deliver Update
Bus -> NodeC : Deliver Update
NodeB -> Bus : Acknowledge Sync
NodeC -> Bus : Acknowledge Sync

== Task Offloading ==
NodeB -> Orchestrator : Request Offloading (Heavy Task)
Orchestrator -> MEC : Forward Heavy Task
MEC -> Orchestrator : Processed Result
Orchestrator -> NodeB : Return Output

== Resilience ==
Orchestrator -> NodeA : Monitor Progress
NodeA -> Orchestrator : Failure Notification
Orchestrator -> NodeC : Reassign Subtask
NodeC -> Orchestrator : Processed Result

== Coordination & Aggregation ==
NodeA -> Collector : Partial Result
NodeB -> Collector : Partial Result
NodeC -> Collector : Partial Result
Collector -> Orchestrator : Aggregated Results
@enduml
\ No newline at end of file

media/federated_learning_option1.png −15.7 KiB (22.3 KiB)
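The "Resilience" lane of the swarm diagram (monitor progress, receive a failure notification, reassign the subtask to another node) can be sketched as a simple dispatch loop. This is a hypothetical illustration, assuming the orchestrator holds an ordered list of candidate nodes and an `execute` callback that raises on node failure; none of these names come from the MEC/oneM2M APIs:

```python
def run_with_reassignment(subtasks, nodes, execute):
    """Dispatch each subtask to the first node that completes it,
    reassigning to the next swarm node when a node reports failure.
    Sketch of the 'Resilience' lane; names are illustrative."""
    results = {}
    for task in subtasks:
        for node in nodes:
            try:
                results[task] = execute(node, task)
                break  # node succeeded; move on to the next subtask
            except RuntimeError:
                continue  # failure notification; reassign to next node
        else:
            raise RuntimeError(f"no node could complete {task!r}")
    return results

def execute(node, task):
    """Stand-in for real subtask execution: Node A reports a failure."""
    if node == "NodeA":
        raise RuntimeError("NodeA failure notification")
    return f"{task} done by {node}"

results = run_with_reassignment(["Subtask A"], ["NodeA", "NodeC"], execute)
# → {"Subtask A": "Subtask A done by NodeC"}
```

In the diagram the same pattern plays out between the Orchestrator, Node A, and Node C, with the Collector node then aggregating the partial results.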