Architecture & TSC Joint Meeting
Gaoweitao(Victor) <victor.gao@...>
Hi All,
The Architecture group and the TSC will hold a joint meeting this evening; the agenda is as follows:
@PTLs, please prepare briefly.
BR
Victor
EdgeGallery integration with Open vSwitch
mariuszsepczuk@...
Hi All,
Is it possible to integrate EdgeGallery with Open vSwitch? I have not seen details about this. Thanks in advance!
AppStore & Developer Joint PT weekly meeting
Zhangbeiyuan
@All, today's regular meeting agenda is as follows:
MecM portal is not accessible.
Kumar Prasad, Vikash
Dear Team, I want to access the MecM portal web page, but it is not opening for me. Could you please guide me on how to restart this service from the command line?
Below is the address:
TASK [eg_check : debug] ************************************************************************* ok: [192.168.1.103] => { "msg": "MECM PORTAL : https://192.168.1.103:30093" }
Thanks Vikash kumar prasad
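A common first step, if the portal was deployed by the EG all-in-one installer, is to check whether the portal pods are running on the local Kubernetes cluster and restart them; a minimal sketch with standard kubectl commands (the namespace and deployment names are placeholders that the first command helps you find):
kubectl get pods -A | grep -i mecm                                            # locate the MECM portal pod and its namespace
kubectl -n <namespace> describe pod <mecm-portal-pod>                         # check status and events if it is not Running
kubectl -n <namespace> rollout restart deployment <mecm-portal-deployment>    # restart the portal
kubectl -n <namespace> delete pod <mecm-portal-pod>                           # alternative: delete the pod and let it be recreated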
【MECM】MECM Weekly Meeting Invitation
#mecm
Chandler Li
Agenda for this meeting:
1. Demo of the MECM v1.5 requirements.
2. Discussion of network creation via the package approach.
The MECM weekly meeting is held every Friday. Please join us to discuss the issues and the development progress.
Zoom ID: 6477069444, Zoom password: 125991
Time: 16:30~17:30 Beijing Time, every Friday.
Weekly Meeting Gitee Wiki Link: https://gitee.com/edgegallery/community/tree/master/MECM%20PT/Weekly%20Meeting
Please update the wiki if you have any topic of interest.
About MECM: MECM is the management and control part of the edge system; its main functions are as follows:
Re: Getting error while uploading container
xudan
Hi Vikash,
Sorry for the late response.
Could you describe how you saved the docker image (for example, the docker save command you used) and what the name of your uploaded file is?
The attachment is a guide for sample APP deployment, but it is written for v1.2. For your v1.3 EG there may be some minor differences.
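For context, a container image is usually exported with docker save before it is uploaded; a minimal example, where the image name, tag, and file name are purely illustrative:
docker save -o myapp_v1.0.tar myapp:1.0     # export the image to a tar archive
ls -lh myapp_v1.0.tar                       # this .tar file is what gets uploaded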
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear Team, when we try to upload a container in the EdgeGallery developer environment, it throws an error saying "upload fail". How do I check the logs to see why it failed?
Note: my container image is about 700 MB.
Is there a good document/tutorial for developing/integrating our app on EdgeGallery that can guide me step by step?
Thanks Vikash kumar prasad
Getting error while uploading container
Kumar Prasad, Vikash
Dear Team, when we try to upload a container in the EdgeGallery developer environment, it throws an error saying "upload fail". How do I check the logs to see why it failed?
Note: my container image is about 700 MB.
Is there a good document/tutorial for developing/integrating our app on EdgeGallery that can guide me step by step?
Thanks Vikash kumar prasad
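If the EG instance was installed with the offline installer, the developer platform typically runs as pods on the local Kubernetes cluster, so the server-side reason for an upload failure can usually be found in the pod logs; a hedged sketch (the pod and namespace names are placeholders to be looked up with the first command):
kubectl get pods -A | grep -i developer                    # locate the developer-platform pod(s)
kubectl -n <namespace> logs <developer-pod> --tail=200     # inspect recent log lines around the failed upload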
TestWGWeeklyMeeting (20211201) [Every Wednesday 16:30-17:30] #test
liuhui@pmlabs.com.cn
Welcome to the EdgeGallery community Test WG weekly meeting on December 1st (held every Wednesday)!
Meeting Info:
Meeting Link: Tencent Meeting https://meeting.tencent.com/dm/wUhEMHF8O7WQ
Meeting ID: 801-189-769
Time: Dec 1st, 16:30~17:30, UTC+8 (Beijing Time) [every Wednesday 16:30-17:30]
## Topics
1. Discussion of the test results for EG R1.5 sprint 1 -- owner: all
2. Discussion of R1.5 sprint 2 development -- owner: all
PS: Welcome to join the Test WG! Test WG Member Registry: https://gitee.com/edgegallery/community/blob/master/Test%20WG/Readme.md
Best Regards
刘辉 Dr. LIU Hui
Future Network / Industrial Internet, Edge Intelligence Technology
Purple Mountain Laboratories (Network Communication and Security)
Test WG Chair, EdgeGallery open-source community
Re: Regarding good tutorial on edge gallery installation
xudan
Using localhost will work in our next release, v1.5.0, which will be released next month. For v1.3.0 you can only set the IP address there, as I sent you before.
hosts-aio:
[master]
192.168.1.102
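Concretely, the fix for the kubeadm failure quoted below is to put the VM's IPv4 address (not localhost) under [master] in the inventory and re-run the playbook; a minimal sketch using the paths and address that appear in this thread:
cd /home/nano/EdgeGallery-v1.3.0-all-x86/install
# hosts-aio should contain the VM's IP, e.g.:
#   [master]
#   192.168.1.102
ansible-playbook --inventory hosts-aio eg_all_aio_install.yml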
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Yes, we are using EG v1.3.0. Which version should I use?
Because we were installing EG directly on the VM, we assumed it was localhost, so we did not give the IP address. In which case can I configure localhost?
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Are you using EG v1.3.0?
It seems that you did not give the IP in your hosts-aio, but set localhost in that file instead.
I guess you are using the v1.3.0 offline package but referring to the latest version of the user guide; this will change in the next EG release.
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Thanks for your reply.
We are now using Ubuntu Server, and we are getting the error below:
root@nano:/home/nano/EdgeGallery-v1.3.0-all-x86/install# ansible-playbook --inventory hosts-aio eg_all_aio_install.yml
TASK [k8s : debug] *****************************************************************************************************ok: [localhost] => { "msg": "Docker is not installed, will be installed" }
TASK [k8s : Create the Directory for Docker Offline Tarball File] ******************************************************ok: [localhost]
TASK [k8s : Unarchive Docker Offline Tarball File] *********************************************************************changed: [localhost]
TASK [k8s : Copy Docker Exec File to /usr/bin] *************************************************************************changed: [localhost] => (item=containerd) changed: [localhost] => (item=containerd-shim) changed: [localhost] => (item=ctr) changed: [localhost] => (item=docker) changed: [localhost] => (item=dockerd) changed: [localhost] => (item=docker-init) changed: [localhost] => (item=docker-proxy) changed: [localhost] => (item=runc)
TASK [k8s : Copy Docker Service File to /etc/systemd/system/] **********************************************************changed: [localhost]
TASK [k8s : System Daemon Reload] **************************************************************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Wait For Docker Service Has Been Started] ******************************************************************changed: [localhost]
TASK [k8s : Create Directory /etc/docker For Docker Registry] **********************************************************ok: [localhost]
TASK [k8s : Copying script to /tmp for execution] **********************************************************************changed: [localhost]
TASK [k8s : Running script docker-daemon-update.py to append daemon.json] **********************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Load k8s Images] *******************************************************************************************changed: [localhost]
TASK [k8s : Install k8s Tools] *****************************************************************************************changed: [localhost] => (item=kubectl) changed: [localhost] => (item=kubeadm) changed: [localhost] => (item=kubelet)
TASK [k8s : Copy Kubelet Service File to /etc/systemd/system/] *********************************************************changed: [localhost]
TASK [k8s : Create the Directory for Kubelet] **************************************************************************changed: [localhost]
TASK [k8s : Copy Kubeadm Config File to /etc/systemd/system/kubelet.service.d] *****************************************changed: [localhost]
TASK [k8s : System Daemon Reload] **************************************************************************************changed: [localhost]
TASK [k8s : Enable kubelet Service] ************************************************************************************changed: [localhost]
TASK [k8s : Stop Firewalld Service] ************************************************************************************fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service firewalld: host"} ...ignoring
TASK [k8s : Disable Firewalld Service] *********************************************************************************fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service firewalld: host"} ...ignoring
TASK [k8s : Off Swap Area] *********************************************************************************************changed: [localhost]
TASK [k8s : Sed File /etc/fstab] ***************************************************************************************[WARNING]: Consider using the replace, lineinfile or template module rather than running 'sed'. If you need to use command because replace, lineinfile or template is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message. changed: [localhost]
TASK [k8s : Modprobe br_netfilter] *************************************************************************************changed: [localhost]
TASK [k8s : Copy K8s Config File to /etc/sysctl.d/] ********************************************************************changed: [localhost]
TASK [k8s : Load Config Files] *****************************************************************************************changed: [localhost]
TASK [k8s : Install Package conntrack] *********************************************************************************changed: [localhost]
TASK [k8s : Install Package socat] *************************************************************************************changed: [localhost]
TASK [k8s : Set Network Interface For Calico] **************************************************************************changed: [localhost]
TASK [k8s : Install k8s with kubeadm] **********************************************************************fatal: [localhost]: FAILED! => non-zero return code (rc=1)
cmd: kubeadm init --kubernetes-version=v1.18.7 --apiserver-advertise-address=localhost --pod-network-cidr=10.244.0.0/16 -v=5
stderr (key lines):
I1130 07:33:28.155232 5987 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
couldn't use "localhost" as "apiserver-advertise-address", must be ipv4 or ipv6 address
(the remainder of stderr is a Go stack trace through kubeadm's init configuration and cobra command handling; stdout was empty)
PLAY RECAP *************************************************************************************************************localhost : ok=45 changed=33 unreachable=0 failed=1 skipped=26 rescued=0 ignored=4
It failed at the Kubernetes installation step.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Yes, EG can work on that.
We always test and support deployment on Ubuntu 18.04.
var.yml:
NETWORK_INTERFACE: enp.*
ENABLE_PERSISTENCE: true
usermgmt_mail_enabled: false
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks for your reply. I forgot to mention that I am installing EG on Ubuntu Desktop 18.04.5 in a VirtualBox VM.
Do I have to use Ubuntu Server 18.04.5?
Could you please suggest the best OS on which EG runs smoothly?
The interface name on my VM is enp0s3 and its IP address is 192.168.1.102.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
sshpass is a Linux command and has no relationship to EG itself. If the sshpass command fails to execute, it should be an issue with the VM itself. It would be very helpful if you could send the details of the permission issue.
The following are the configuration files for your VM.
hosts-aio:
[master]
192.168.1.102
var.yml: I need to know the network interface of this VM.
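To find the interface name and its IPv4 address on the VM, standard Linux commands are sufficient (shown only as a pointer, not EG-specific):
ip -4 addr show            # lists interfaces with their IPv4 addresses, e.g. enp0s3
ip route get 1.1.1.1       # shows the interface and source address used for outbound traffic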
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it is failing at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. It is failing with a permission error even though I am running it as the root user.
Could you please provide me with sample configuration files for the EdgeGallery installation (hosts-aio, var.yml)?
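Regarding the failing ssh-copy-id step above: it normally assumes an SSH key pair already exists on the installer host and that root password login is enabled on the target; a hedged sketch of those prerequisites using standard OpenSSH/Ubuntu commands (not EG-specific, and the actual cause may differ):
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa                                  # ssh-copy-id needs a local key to copy
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config   # allow root password login on the target VM
sudo systemctl restart ssh
sshpass -p '<root password>' ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@192.168.1.102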
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can go to the EdgeGallery official website to get everything you need, including the offline install packages and the install guide: https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
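At a high level, the offline all-in-one flow that appears later in this thread looks like the following (the archive name is inferred from the directory name in the logs below; the linked README is the authoritative reference):
# on the target VM, as root
tar -xzf EdgeGallery-v1.3.0-all-x86.tar.gz     # offline package downloaded from the website (assumed file name)
cd EdgeGallery-v1.3.0-all-x86/install
# edit hosts-aio (put the VM's IPv4 address under [master]) and var.yml (NETWORK_INTERFACE, etc.)
ansible-playbook --inventory hosts-aio eg_all_aio_install.yml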
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components in one VM image? Is there a good EdgeGallery document that can guide me through the installation without errors? Do you provide a ready-to-use, hosted EdgeGallery service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
Re: Regarding good tutorial on edge gallery installation
Kumar Prasad, Vikash
Thanks for your reply .
We are now using Ubuntu server. and we are getting below error
EdgeGallery-v1.3.0-all-x86/install# root@nano:/home/nano/EdgeGallery-v1.3.0-all-x86/install# ansible-playbook --inventory hosts-aio eg_all_aio_install.yml
TASK [k8s : debug] *****************************************************************************************************ok: [localhost] => { "msg": "Docker is not installed, will be installed" }
TASK [k8s : Create the Directory for Docker Offline Tarball File] ******************************************************ok: [localhost]
TASK [k8s : Unarchive Docker Offline Tarball File] *********************************************************************changed: [localhost]
TASK [k8s : Copy Docker Exec File to /usr/bin] *************************************************************************changed: [localhost] => (item=containerd) changed: [localhost] => (item=containerd-shim) changed: [localhost] => (item=ctr) changed: [localhost] => (item=docker) changed: [localhost] => (item=dockerd) changed: [localhost] => (item=docker-init) changed: [localhost] => (item=docker-proxy) changed: [localhost] => (item=runc)
TASK [k8s : Copy Docker Service File to /etc/systemd/system/] **********************************************************changed: [localhost]
TASK [k8s : System Daemon Reload] **************************************************************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Wait For Docker Service Has Been Started] ******************************************************************changed: [localhost]
TASK [k8s : Create Directory /etc/docker For Docker Registry] **********************************************************ok: [localhost]
TASK [k8s : Copying script to /tmp for execution] **********************************************************************changed: [localhost]
TASK [k8s : Running script docker-daemon-update.py to append daemon.json] **********************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Restart Docker Service] ************************************************************************************changed: [localhost]
TASK [k8s : Load k8s Images] *******************************************************************************************changed: [localhost]
TASK [k8s : Install k8s Tools] *****************************************************************************************changed: [localhost] => (item=kubectl) changed: [localhost] => (item=kubeadm) changed: [localhost] => (item=kubelet)
TASK [k8s : Copy Kubelet Service File to /etc/systemd/system/] *********************************************************changed: [localhost]
TASK [k8s : Create the Directory for Kubelet] **************************************************************************changed: [localhost]
TASK [k8s : Copy Kubeadm Config File to /etc/systemd/system/kubelet.service.d] *****************************************changed: [localhost]
TASK [k8s : System Daemon Reload] **************************************************************************************changed: [localhost]
TASK [k8s : Enable kubelet Service] ************************************************************************************changed: [localhost]
TASK [k8s : Stop Firewalld Service] ************************************************************************************fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service firewalld: host"} ...ignoring
TASK [k8s : Disable Firewalld Service] *********************************************************************************fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not find the requested service firewalld: host"} ...ignoring
TASK [k8s : Off Swap Area] *********************************************************************************************changed: [localhost]
TASK [k8s : Sed File /etc/fstab] ***************************************************************************************[WARNING]: Consider using the replace, lineinfile or template module rather than running 'sed'. If you need to use command because replace, lineinfile or template is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message. changed: [localhost]
TASK [k8s : Modprobe br_netfilter] *************************************************************************************changed: [localhost]
TASK [k8s : Copy K8s Config File to /etc/sysctl.d/] ********************************************************************changed: [localhost]
TASK [k8s : Load Config Files] *****************************************************************************************changed: [localhost]
TASK [k8s : Install Package conntrack] *********************************************************************************changed: [localhost]
TASK [k8s : Install Package socat] *************************************************************************************changed: [localhost]
TASK [k8s : Set Network Interface For Calico] **************************************************
changed: [localhost]

TASK [k8s : Install k8s with kubeadm] **********************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubeadm init --kubernetes-version=v1.18.7 --apiserver-advertise-address=localhost --pod-network-cidr=10.244.0.0/16 -v=5", "rc": 1, "msg": "non-zero return code", "start": "2021-11-30 07:33:28.028912", "end": "2021-11-30 07:33:28.158670"}
    stderr: I1130 07:33:28.155232 5987 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
    stderr: couldn't use "localhost" as "apiserver-advertise-address", must be ipv4 or ipv6 address
    (remaining stderr is the kubeadm call stack through initconfiguration.go, init.go and cobra command.go, omitted here)

PLAY RECAP *************************************************************************************
localhost : ok=45  changed=33  unreachable=0  failed=1  skipped=26  rescued=0  ignored=4
The installation failed at the "Install k8s with kubeadm" step.
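The key line in the stderr above is that kubeadm rejected "localhost" as the --apiserver-advertise-address: kubeadm only accepts a literal IPv4 or IPv6 address there, so the installer has to be given the node's real IP and a matching network interface. A minimal sketch of the relevant inputs, assuming the Release-v1.3 all-in-one layout and reusing the IP and interface quoted later in this thread (adjust to your own VM):

# host-aio -- Ansible inventory for the all-in-one node
[master]
192.168.1.102

# var.yml -- NETWORK_INTERFACE must match the VM's real NIC name (enp0s3 matches enp.*)
NETWORK_INTERFACE: enp.*
ENABLE_PERSISTENCE: true
usermgmt_mail_enabled: false

# With a resolvable node IP, the generated kubeadm call would look roughly like:
kubeadm init --kubernetes-version=v1.18.7 --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16

If kubeadm init then succeeds, kubectl get nodes on the VM should eventually show the node as Ready once the Calico pods come up.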
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Yes, EG can work on that.
We always test on Ubuntu 18.04, and that is the version we support for deployment.
Var.yml:
NETWORK_INTERFACE: enp.*
ENABLE_PERSISTENCE: true
usermgmt_mail_enabled: false
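As a quick sanity check (a standard iproute2 command, not specific to EG), you can list the VM's interfaces to confirm that the enp.* pattern actually matches one of them:

ip -o link show | awk -F': ' '{print $2}'    # expect a name matching enp.*, e.g. enp0s3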
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks for your reply. I forgot to mention that I am installing EG on Ubuntu Desktop 18.04.5 in a VirtualBox VM.
Do I have to use Ubuntu Server 18.04.5?
Could you please suggest the best OS on which EG runs smoothly?
The interface name on my VM is enp0s3 and its IP address is 192.168.1.102.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
sshpass is a standard Linux command and has no relationship to EG itself. If the sshpass command fails, the issue is most likely with the VM itself. It would be very helpful if you could send the details of the permission error.
The following are the configuration files for your VM.
Host-aio:
[master]
192.168.1.102
Var.yml: we need to know the network interface of this VM.
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it fails at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. The step fails with a permission error even though I am running it as the root user.
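A hedged troubleshooting sketch (standard OpenSSH behaviour, not taken from this thread; the password and IP below are the ones already quoted here): a permission failure at this step usually comes from the SSH side rather than from sshpass itself, so running the key copy by hand and checking whether root login over SSH is enabled on the VM often narrows it down.

# generate a key pair first if ~/.ssh/id_rsa.pub does not exist yet
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# re-run the key copy manually to see the full error text
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@192.168.1.102
# "Permission denied" as root often means root SSH login is disabled on the VM; check and, if needed, adjust:
grep PermitRootLogin /etc/ssh/sshd_config    # expect "PermitRootLogin yes" for a root-based install
systemctl restart ssh                        # Ubuntu service name; "sshd" on some other distros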
One more request: could you please provide sample configuration files for the EdgeGallery installation (host-aio, var.yml)?
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
xudan
Yes, EG can work on that.
We always test on Ubuntu 18.04, and that is the version we support for deployment.
Var.yml:
NETWORK_INTERFACE: enp.*
ENABLE_PERSISTENCE: true
usermgmt_mail_enabled: false
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks for your reply. I forgot to mention that I am installing EG on Ubuntu Desktop 18.04.5 in a VirtualBox VM.
Do I have to use Ubuntu Server 18.04.5?
Could you please suggest the best OS on which EG runs smoothly?
The interface name on my VM is enp0s3 and its IP address is 192.168.1.102.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
sshpass is a standard Linux command and has no relationship to EG itself. If the sshpass command fails, the issue is most likely with the VM itself. It would be very helpful if you could send the details of the permission error.
The following are the configuration files for your VM.
Host-aio:
[master]
192.168.1.102
Var.yml: we need to know the network interface of this VM.
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it fails at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. The step fails with a permission error even though I am running it as the root user.
One more request: could you please provide sample configuration files for the EdgeGallery installation (host-aio, var.yml)?
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Architecture & TSC Joint Meeting/ Wed 19:00-20:00
Gaoweitao(Victor) <victor.gao@...>
Hi All,
The Architecture group and the TSC will hold a joint meeting tomorrow evening; the agenda is as follows:
@PTLs, please do some brief preparation.
BR
Victor
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
Kumar Prasad, Vikash
Dear, thanks for your reply. I forgot to mention that I am installing EG on Ubuntu Desktop 18.04.5 in a VirtualBox VM.
Do I have to use Ubuntu Server 18.04.5?
Could you please suggest the best OS on which EG runs smoothly?
The interface name on my VM is enp0s3 and its IP address is 192.168.1.102.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
sshpass is a standard Linux command and has no relationship to EG itself. If the sshpass command fails, the issue is most likely with the VM itself. It would be very helpful if you could send the details of the permission error.
The following are the configuration files for your VM.
Host-aio:
[master]
192.168.1.102
Var.yml: we need to know the network interface of this VM.
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it fails at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. The step fails with a permission error even though I am running it as the root user.
One more request: could you please provide sample configuration files for the EdgeGallery installation (host-aio, var.yml)?
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
xudan
sshpass is a standard Linux command and has no relationship to EG itself. If the sshpass command fails, the issue is most likely with the VM itself. It would be very helpful if you could send the details of the permission error.
The following are the configuration files for your VM.
Host-aio:
[master]
192.168.1.102
Var.yml: we need to know the network interface of this VM.
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it fails at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. The step fails with a permission error even though I am running it as the root user.
One more request: could you please provide sample configuration files for the EdgeGallery installation (host-aio, var.yml)?
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
Kumar Prasad, Vikash
Dear, thanks a lot for your reply. I was following the offline installation mode, but it fails at the step below:
sshpass -p Linux@123 ssh-copy-id -p 22 -o StrictHostKeyChecking=no root@....102
Please note: 192.168.1.102 is my VM's IP address. The step fails with a permission error even though I am running it as the root user.
One more request: could you please provide sample configuration files for the EdgeGallery installation (host-aio, var.yml)?
Thanks a lot again for your help.
Thanks Vikash kumar prasad
From: main@edgegallery.groups.io <main@edgegallery.groups.io>
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
Kanagaraj Manickam
Hi Vikash, deploying EdgeGallery is straightforward. Once you have installed it, please refer to http://docs.edgegallery.org/ to go through the tutorial, and let us know if you need any help.
Regards Kanag
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of xudan via groups.io
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|
||||||||||||||
Re: Regarding good tutorial on edge gallery installation
xudan
Hi Vikash,
Yes, EdgeGallery supports an all-in-one installation on a single VM. You can get everything you need from the EdgeGallery official website, including the offline install packages and the install guide. https://www.edgegallery.org/en/ https://gitee.com/edgegallery/installer/blob/Release-v1.3/ansible_install/README-en.md
We don’t provide any online EG service at this stage.
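To give a rough picture of what the offline all-in-one install involves (a sketch only; the exact package and script names are defined in the Release-v1.3 README linked above, so the names in angle brackets below are placeholders, not the official ones):

# 1. unpack the offline package downloaded from the official website onto the target VM
tar -zxvf <EdgeGallery-offline-package>.tar.gz && cd <unpacked-directory>
# 2. edit the inventory and variables: put the VM's IP under [master] in the all-in-one hosts
#    file and set NETWORK_INTERFACE in var.yml to match the VM's NIC
# 3. run the installer entry script named in the README (placeholder name here)
bash <install-script>.sh

The README-en.md linked above remains the authoritative reference for the actual commands.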
BR, Dan Xu
From: main@edgegallery.groups.io [mailto:main@edgegallery.groups.io]
On Behalf Of Kumar Prasad, Vikash
Dear team, I am trying to install EdgeGallery on a single VM. Is it possible to install all the components on one VM image? Is there a good document on EdgeGallery that can guide me through the installation without errors? Do you provide an already-installed EdgeGallery as a service, so that we can use it without worrying about installing the software ourselves?
Thanks Vikash kumar prasad
|
||||||||||||||
|