public inbox for gentoo-dev@lists.gentoo.org
* [gentoo-dev] [GSoC2012] Cross Container Support Project
@ 2012-03-23 10:36 Jing Huang
  2012-03-23 10:46 ` Alexey Shvetsov
  0 siblings, 1 reply; 4+ messages in thread
From: Jing Huang @ 2012-03-23 10:36 UTC
  To: gentoo-soc, gentoo-dev; +Cc: Luca Barbato

[-- Attachment #1: Type: text/plain, Size: 3704 bytes --]

Hi Everyone,

I am a student at Peking University in China. I am very interested in the Cross Container Support project (http://wiki.gentoo.org/wiki/Google_Summer_of_Code/2012/Ideas#Cross_Container_Support). I have some ideas about the project and would appreciate it if you could review them.
First, I think the current workflow of downloading a stage tarball and a Portage snapshot into a specified directory each time needs to be improved (http://www.gentoo.org/proj/en/base/embedded/handbook/index.xml?part=1&chap=5). The execution environment has to be rebuilt every time, which is inconvenient. Moreover, the files in that directory could be modified by other processes, so it is not an isolated execution environment at all. Therefore, I would create an image file for each arch (arm, mips, etc.). The image file contains the arch stage and the Portage tree. When creating the qemu-user container, the init script mounts the image file at the specified directory and then chroots into it.
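As a rough illustration of the image-based approach (the file names, size, and mount point below are just examples I made up, not a final design):

    # create a reusable image holding an ARM stage3 plus a Portage snapshot
    dd if=/dev/zero of=arm-stage3.img bs=1M count=4096
    mkfs.ext4 -F arm-stage3.img
    mount -o loop arm-stage3.img /mnt/arm-chroot
    tar xpf stage3-armv7a-latest.tar.bz2 -C /mnt/arm-chroot
    tar xf portage-latest.tar.bz2 -C /mnt/arm-chroot/usr
    # later runs only need the loop mount; the unpacked tree stays inside the image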
Second, if the process accesses the disk frequently, I would back the qemu-user container with tmpfs. The process in the container would then run on tmpfs and be sped up.
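For example, the build area of the container could sit on tmpfs (the size and path below are arbitrary choices for illustration):

    # keep Portage's build area in RAM; contents are lost on unmount
    mount -t tmpfs -o size=2G tmpfs /mnt/arm-chroot/var/tmp/portage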
Third, I would customize a lightweight qemu-user container for a specific process if necessary. In my previous work, I made a custom ramdisk VM for a process by modifying the mkinitrd script: with the help of the “ldd -v” command, I collected the shared libraries the process needs and packed them into the ramdisk. In Gentoo, I could perhaps customize the qemu-user container using USE flags instead.
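A sketch of the library-collection step from that earlier work (the binary path and destination directory are hypothetical):

    # copy a binary and every shared library it links against into a ramdisk root
    BIN=/usr/bin/foo
    DEST=/tmp/ramdisk-root
    mkdir -p "$DEST"
    cp --parents "$BIN" "$DEST"
    for lib in $(ldd "$BIN" | awk '$3 ~ /^\// { print $3 } $1 ~ /^\// { print $1 }'); do
        cp --parents "$lib" "$DEST"
    done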
In my proposal, this project uses a small amount of bash to implement just the core tools (create, destroy, enter). In short, I plan to implement them as follows:
1. create routine
# qemu_container_create config_file
 The config_file specifies the arch, the arch image file, the chroot directory, and additional qemu arguments (like "-cpu cortex-a8"). The create routine will then execute as follows (a rough bash sketch appears after the steps):
   1). If the arch image exists, mount it at the chroot directory.
   2). If not, create a new image file, download the stage tarball and Portage snapshot, and unpack them into the chroot directory.
   3). modprobe binfmt_misc and register the qemu-user-arch interpreter with binfmt_misc.
   4). Install the static qemu-user binary into the chroot directory and mount the required directories.
   5). Register the new container with our management tool. The registration info includes container_id, stage tarball version, stage tarball arch, chroot directory, etc.
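A very rough sketch of the create routine; the config format, the registry file, and the ARM-only binfmt_misc entry are all assumptions for illustration:

    qemu_container_create() {
        local config="$1"
        . "$config"   # expected to set ARCH, IMG, CHROOT_DIR, QEMU_ARGS
                      # (QEMU_ARGS would go into a qemu wrapper, omitted here)

        if [ -f "$IMG" ]; then
            mount -o loop "$IMG" "$CHROOT_DIR"
        else
            dd if=/dev/zero of="$IMG" bs=1M count=4096
            mkfs.ext4 -F "$IMG"
            mount -o loop "$IMG" "$CHROOT_DIR"
            tar xpf "stage3-${ARCH}-latest.tar.bz2" -C "$CHROOT_DIR"
            tar xf portage-latest.tar.bz2 -C "$CHROOT_DIR/usr"
        fi

        modprobe binfmt_misc
        mountpoint -q /proc/sys/fs/binfmt_misc || \
            mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc
        # register qemu-arm as the interpreter for ARM ELF binaries
        [ -e /proc/sys/fs/binfmt_misc/qemu-arm ] || \
            echo ':qemu-arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-arm:' \
                > /proc/sys/fs/binfmt_misc/register

        cp /usr/bin/qemu-arm "$CHROOT_DIR/usr/bin/"      # static qemu-user
        mount --bind /dev  "$CHROOT_DIR/dev"
        mount --bind /proc "$CHROOT_DIR/proc"
        mount --bind /sys  "$CHROOT_DIR/sys"

        # register the container with the (hypothetical) management registry
        mkdir -p /var/lib/qemu-containers
        local id=$(date +%s)
        echo "$id;$ARCH;$IMG;$CHROOT_DIR" >> /var/lib/qemu-containers/registry
        echo "created container $id"
    }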
 
2. enter routine
#qemu_container_enter container_id
   The enter routine opens a shell and chroots into the environment (see the sketch below). The management tool should also mark the container as being in the "running" state.
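A minimal sketch, assuming the same hypothetical registry layout as in the create sketch:

    qemu_container_enter() {
        local id="$1" dir
        dir=$(awk -F';' -v id="$id" '$1 == id { print $4 }' /var/lib/qemu-containers/registry)
        [ -n "$dir" ] || { echo "unknown container: $id" >&2; return 1; }
        echo running > "/var/lib/qemu-containers/$id.state"
        # binfmt_misc transparently hands target-arch binaries to qemu-user
        chroot "$dir" /bin/bash -l
    }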

3. destroy routine
#qemu_container_destroy container_id
    1). Exit from the chroot environment.
    2). Unmount everything that is no longer in use.
    3). Clear the qemu-user-arch entry from the binfmt_misc registration, but only if no other container is still using it.
    4). Remove the container info from the management tool.
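A rough sketch of the destroy routine, against the same hypothetical registry:

    qemu_container_destroy() {
        local id="$1" dir arch
        dir=$(awk -F';' -v id="$id" '$1 == id { print $4 }' /var/lib/qemu-containers/registry)
        arch=$(awk -F';' -v id="$id" '$1 == id { print $2 }' /var/lib/qemu-containers/registry)
        [ -n "$dir" ] || { echo "unknown container: $id" >&2; return 1; }

        umount "$dir/dev" "$dir/proc" "$dir/sys"
        umount "$dir"                       # releases the loop-mounted image

        # drop this container from the registry, then unregister the binfmt
        # entry only if no other container of the same arch remains
        sed -i "/^$id;/d" /var/lib/qemu-containers/registry
        if ! grep -q ";$arch;" /var/lib/qemu-containers/registry; then
            echo -1 > "/proc/sys/fs/binfmt_misc/qemu-$arch"
        fi
        rm -f "/var/lib/qemu-containers/$id.state"
    }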
 
Besides these routines, I would also implement container_list and container_export routines. The former lists the info/state of the containers; the latter exports system images.
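A trivial sketch of container_list over the same hypothetical registry file:

    qemu_container_list() {
        # print id, arch, image file, and chroot directory of each registered container
        awk -F';' '{ printf "%-12s %-8s %-30s %s\n", $1, $2, $3, $4 }' \
            /var/lib/qemu-containers/registry
    }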
 
Questions:
1). Why integrate the qemu-user container with crossdev? Crossdev builds cross compilers. The qemu-user container can not only compile programs for foreign architectures but also test them, so I thought that if the qemu-user container were good enough, it could replace crossdev.
2). "An additional task is to support layered systems so native userspace can be used to further speed up the process (hybrid chroot)."  I don't very understand the task. Could someone help me and explain the “hybrid chroot”? 

Could someone give me some suggestions? Any comments would be much appreciated.

Jing Huang.

[-- Attachment #2: Type: text/html, Size: 8836 bytes --]


* Re: [gentoo-dev] [GSoC2012] Cross Container Support Project
  2012-03-23 10:36 [gentoo-dev] [GSoC2012] Cross Container Support Project Jing Huang
@ 2012-03-23 10:46 ` Alexey Shvetsov
  2012-03-23 11:16   ` Brian Harring
  0 siblings, 1 reply; 4+ messages in thread
From: Alexey Shvetsov @ 2012-03-23 10:46 UTC
  To: gentoo-dev

Hi!

Well, I have 2 ARM LXC containers on an amd64 machine. It works well as
long as qemu supports most of the needed cross-arch instructions.


-- 
Best Regards,
Alexey 'Alexxy' Shvetsov
Petersburg Nuclear Physics Institute, Russia
Department of Molecular and Radiation Biophysics
Gentoo Team Ru
Gentoo Linux Dev
mailto:alexxyum@gmail.com
mailto:alexxy@gentoo.org
mailto:alexxy@omrb.pnpi.spb.ru




* Re: [gentoo-dev] [GSoC2012] Cross Container Support Project
  2012-03-23 10:46 ` Alexey Shvetsov
@ 2012-03-23 11:16   ` Brian Harring
  2012-03-24 20:57     ` Luca Barbato
  0 siblings, 1 reply; 4+ messages in thread
From: Brian Harring @ 2012-03-23 11:16 UTC
  To: gentoo-dev

On Fri, Mar 23, 2012 at 01:46:17PM +0300, Alexey Shvetsov wrote:
> Hi!
> 
> Well, I have 2 ARM LXC containers on an amd64 machine. It works well as
> long as qemu supports most of the needed cross-arch instructions.

I'd be curious how much of that is native vs. emulated.  The hybrid 
approach of scratchbox/OBS has some definite gains.

If we had a clean way to mark which parts can be native (the toolchain), 
the perf gain would definitely be worth the work...

~brian




* Re: [gentoo-dev] [GSoC2012] Cross Container Support Project
  2012-03-23 11:16   ` Brian Harring
@ 2012-03-24 20:57     ` Luca Barbato
  0 siblings, 0 replies; 4+ messages in thread
From: Luca Barbato @ 2012-03-24 20:57 UTC
  To: gentoo-dev

On 23/03/12 04:16, Brian Harring wrote:
> On Fri, Mar 23, 2012 at 01:46:17PM +0300, Alexey Shvetsov wrote:
>> Hi!
>>
>> Well, I have 2 ARM LXC containers on an amd64 machine. It works well as
>> long as qemu supports most of the needed cross-arch instructions.
> 
> I'd be curious how much of that is native vs. emulated.  The hybrid 
> approach of scratchbox/OBS has some definite gains.
> 
> If we had a clean way to mark which parts can be native (the toolchain), 
> the perf gain would definitely be worth the work...

the rough part is mostly making sure portage knows the paths and getting
the bind-mount game working; the alternative is to build the native part
by unpacking the cross packages and the build-system packages there


so

/ <- emulated
/etc/ld.so.conf.d/native
/usr/${nativehost}/
/usr/${emulatedhost}/
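
roughly, the bind-mount side could look something like this (host triplets
and paths are purely illustrative):

    # matching the layout above: expose the native toolchain tree inside
    # the emulated root and let ld.so find its libraries there
    nativehost=x86_64-pc-linux-gnu
    mkdir -p /mnt/arm-chroot/usr/${nativehost}
    mount --bind /usr/${nativehost} /mnt/arm-chroot/usr/${nativehost}
    echo /usr/${nativehost}/lib > /mnt/arm-chroot/etc/ld.so.conf.d/native.conf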

and then you need to trick portage a bit

Sounds gory already? =)

lu

-- 

Luca Barbato
Gentoo/linux
http://dev.gentoo.org/~lu_zero




