Futures

This page has random ideas and methods for future toolchain work.

Style

Work may fall into one of these styles:

  • New features
  • Iterative improvement
  • Ongoing support

New features such as adding KVM support can be planned and scheduled. Iterative improvement is measured by velocity in patches or net improvement. Ongoing support is an overhead for products and may be capped.

Work may centre around:

  • A topic, such as 'Use all of the SoC'
  • A technology, such as LLVM or OpenCL
  • An outcome, such as improving performance by x %

Topics are preferred.

Technology work may enable other work rather than being an end in itself. LLVM doesn't make sense as a static C compiler, but improving it may enable RenderScript, shader compilers, or other topics.

Outcome work suits iterative processes such as performance or correctness work.

Areas of work may come from:

  • Needs from the TSC
  • Suggestions from the working group to the TSC
  • The skills in the working group

Skill-based work works well across many medium-sized projects when the group has a good number of people.

Areas

We work on everything performance-related. Our work can:

  • Directly improve performance, such as
    • Compiler optimisations
    • Faster base libraries
    • Making faster versions easy to pick up and use
  • Allow the end developer to improve performance by:
    • Documenting and spreading methods
    • Making it easier to investigate performance problems through profiling and trace
    • Adding infrastructure to build optimisations on, like IFUNC/hwcap runtime selection (see the sketch after this list)
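
As a sketch of that last item: IFUNC lets a library export one symbol and have the dynamic linker bind it, once at load time, to whichever implementation suits the running CPU. Below is a minimal example in C, assuming an ARM GLIBC target where the loader passes AT_HWCAP to the resolver; fast_copy, copy_generic, copy_neon, and the fallback HWCAP_NEON definition are illustrative only, not part of any existing library.

    #include <stddef.h>

    #ifndef HWCAP_NEON
    #define HWCAP_NEON (1 << 12)   /* normally from <asm/hwcap.h> on ARM Linux */
    #endif

    typedef void *(*copy_fn)(void *, const void *, size_t);

    /* Baseline byte-by-byte implementation. */
    static void *copy_generic(void *dst, const void *src, size_t n)
    {
        char *d = dst;
        const char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }

    /* Stand-in for a hand-optimised NEON variant. */
    static void *copy_neon(void *dst, const void *src, size_t n)
    {
        return copy_generic(dst, src, n);
    }

    /* The resolver runs once at load time; on ARM GLIBC the loader passes
       the AT_HWCAP bits, so the selection costs nothing per call. */
    static copy_fn resolve_copy(unsigned long hwcap)
    {
        return (hwcap & HWCAP_NEON) ? copy_neon : copy_generic;
    }

    /* Callers just call fast_copy(); the loader binds it to one of the
       variants above. */
    void *fast_copy(void *dst, const void *src, size_t n)
        __attribute__((ifunc("resolve_copy")));

The point for infrastructure work is that libraries can ship both a portable and an optimised version of a routine and let the loader pick, instead of asking every application to detect the CPU itself.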

We work on Android and GLIBC Linux.

We mainly care about time-based performance and mainly work on CPU-bound code. We don't normally look at boot time, load time, or how interactive a program is.

Combinations

The combination explosion comes from:

  • Profiles: mobile, server
  • Language: C, Java/Dalvik
  • Host: GLIBC, Android
  • Tool: linker, loader, compiler, profiler, debugger, trace, emulator
  • Use: product, distribution
  • Build: native, cross
  • Cores: single-thread, multi-thread, multi-process, multi-core, many-core

The kernel and architecture are fixed.

Developer productivity?

  • 'Fission': split debug info for faster link/debug cycles
  • Rolling valgrind features into the compiler

Compiler optimisations for dynamic languages and JITs

PENDING

QEMU doesn't fit in here.

Concurrency?

Enforce coding style?

Runtime specialisation? Parallel?

NaCl?

MILEPOST?

Kernel: perf, ftrace, perf timechart

Not: code review, static analysis, IDE, metrics, test frameworks
