Android 3.0 Platform

API Level: 11

For developers, the Android 3.0 platform is available as a downloadable component for the Android SDK. The downloadable platform includes an Android library and system image, as well as a set of emulator skins and more. The downloadable platform includes no external libraries.

To get started developing or testing against Android 3.0, use the Android SDK Manager to download the platform into your SDK. For more information, see Adding SDK Components. If you are new to Android, download the SDK Starter Package first.

For a high-level introduction to Android 3.0, see the Platform Highlights.

Note: If you've already published an Android application, please test and optimize your application on Android 3.0 as soon as possible. You should do so to be sure your application provides the best experience possible on the latest Android-powered devices. For information about what you can do, read Optimizing Apps for Android 3.0.

Revisions

To determine what revision of the Android 3.0 platform you have installed, refer to the "Installed Packages" listing in the Android SDK and AVD Manager.

Android 3.0, Revision 2 (July 2011)

Dependencies:

Requires SDK Tools r12 or higher.

Notes:

Improvements to the platform's rendering library to support the visual layout editor in the ADT Eclipse plugin. This revision allows for more drawing features in ADT and fixes several bugs in the previous rendering library. It also unlocks several editor features that were added in ADT 12.

Android 3.0, Revision 1 (February 2011)

Dependencies:

Requires SDK Tools r10 or higher.

API Overview

The sections below provide a technical overview of what's new for developers in Android 3.0, including new features and changes in the framework API since the previous version.

Fragments

A fragment is a new framework component that allows you to separate distinct elements of an activity into self-contained modules that define their own UI and lifecycle. To create a fragment, you must extend the Fragment class and implement several lifecycle callback methods, similar to an Activity. You can then combine multiple fragments in a single activity to build a multi-pane UI in which each pane manages its own lifecycle and user inputs.

You can also use a fragment without providing a UI and instead use the fragment as a worker for the activity, such as to manage the progress of a download that occurs only while the activity is running.

Additionally:

  • Fragments are self-contained and you can reuse them in multiple activities
  • You can add, remove, replace and animate fragments inside the activity
  • You can add fragments to a back stack managed by the activity, preserving the state of fragments as they are changed and allowing the user to navigate backward through the different states
  • By providing alternative layouts, you can mix and match fragments, based on the screen size and orientation
  • Fragments have direct access to their container activity and can contribute items to the activity's Action Bar (discussed next)

To manage the fragments in your activity, you must use the FragmentManager, which provides several APIs for interacting with fragments, such as finding fragments in the activity and popping fragments off the back stack to restore their previous state.

To perform a transaction, such as adding or removing a fragment, you must create a FragmentTransaction. You can then call methods such as add(), remove(), or replace(). Once you've applied all the changes you want to perform for the transaction, you must call commit() and the system applies the fragment transaction to the activity.
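
As a minimal sketch, assuming a fragment class named ExampleFragment and a container view with the ID fragment_container (both hypothetical names), a transaction might look like this:

```java
// Inside an Activity on Android 3.0 (API level 11) or higher.
// ExampleFragment and R.id.fragment_container are hypothetical names.
FragmentManager fragmentManager = getFragmentManager();
FragmentTransaction transaction = fragmentManager.beginTransaction();

ExampleFragment fragment = new ExampleFragment();
transaction.add(R.id.fragment_container, fragment);
transaction.addToBackStack(null); // let the user navigate back to the previous state
transaction.commit();             // apply the transaction to the activity
```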

For more information about using fragments, read the Fragments documentation. Several samples are also available in the API Demos application.

Action Bar

The Action Bar is a replacement for the traditional title bar at the top of the activity window. It includes the application logo in the left corner and provides a new interface for items in the Options Menu. Additionally, the Action Bar allows you to:

  • Add menu items directly in the Action Bar—as "action items."

    In your XML declaration for the menu item, include the android:showAsAction attribute with a value of "ifRoom". When there's enough room, the menu item appears directly in the Action Bar. Otherwise, the item is placed in the overflow menu, revealed by the menu icon on the right side of the Action Bar.

  • Replace an action item with a widget (such as a search box)—creating an "action view."

    In the XML declaration for the menu item, add the android:actionViewLayout attribute with a layout resource or the android:actionViewClass attribute with the class name of a widget. (You must also declare the android:showAsAction attribute so that the item appears in the Action Bar.) If there's not enough room in the Action Bar and the item appears in the overflow menu, it behaves like a regular menu item and does not show the widget.

  • Add an action to the application logo and replace it with a custom logo

    The application logo is automatically assigned the android.R.id.home ID, which the system delivers to your activity's onOptionsItemSelected() callback when touched. Simply respond to this ID in your callback method to perform an action, such as going to your application's "home" activity.

    To replace the icon with a logo, specify your application logo in the manifest file with the android:logo attribute, then call setDisplayUseLogoEnabled(true) in your activity.

  • Add breadcrumbs to navigate backward through the back stack of fragments
  • Add tabs or a drop-down list to navigate through fragments
  • Customize the Action Bar with themes and backgrounds
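
For example, a menu resource declaring one action item and one action view might look like the following sketch, using the android:showAsAction and android:actionViewLayout attributes described above (the item IDs, titles, icon, and layout resource are hypothetical):

```xml
<!-- res/menu/main.xml (hypothetical); requires API level 11 -->
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Appears directly in the Action Bar when there's room -->
    <item android:id="@+id/menu_save"
          android:title="Save"
          android:icon="@drawable/ic_save"
          android:showAsAction="ifRoom" />
    <!-- Replaced by a custom widget (an "action view") -->
    <item android:id="@+id/menu_search"
          android:title="Search"
          android:showAsAction="ifRoom"
          android:actionViewLayout="@layout/search_box" />
</menu>
```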

The Action Bar is standard for all applications that use the new holographic theme, which is also standard when you set either the android:minSdkVersion or android:targetSdkVersion to "11".

For more information about the Action Bar, read the Action Bar documentation. Several samples are also available in the API Demos application.

System clipboard

Applications can now copy and paste data (beyond mere text) to and from the system-wide clipboard. Clipped data can be plain text, a URI, or an intent.

By giving the system access to your data through a content provider, you allow the user to copy complex content (such as an image or data structure) from your application and paste it into another application that supports that type of content.

To start using the clipboard, get the global ClipboardManager object by calling getSystemService(CLIPBOARD_SERVICE).

To copy an item to the clipboard, you need to create a new ClipData object, which holds one or more ClipData.Item objects, each describing a single entity. To create a ClipData object containing just one ClipData.Item, you can use one of the helper methods, such as newPlainText(), newUri(), and newIntent(), which each return a ClipData object pre-loaded with the ClipData.Item you provide.

To add the ClipData to the clipboard, pass it to setPrimaryClip() on your instance of ClipboardManager.

You can then read the data on the clipboard (in order to paste it) by calling getPrimaryClip() on the ClipboardManager. Handling the ClipData you receive can be complicated, and you need to be sure you can actually handle the data type in the clipboard before attempting to paste it.

The clipboard holds only one piece of clipped data (a ClipData object) at a time, but one ClipData can contain multiple ClipData.Items.
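
Putting the pieces together, a minimal copy-and-paste sequence might look like this sketch (the label and text values are placeholders):

```java
// Copy: place a plain-text clip on the system clipboard.
ClipboardManager clipboard =
        (ClipboardManager) getSystemService(CLIPBOARD_SERVICE);
ClipData clip = ClipData.newPlainText("label", "Text to copy");
clipboard.setPrimaryClip(clip);

// Paste: verify the data type before reading it back.
ClipData pasted = clipboard.getPrimaryClip();
if (pasted != null && pasted.getDescription()
        .hasMimeType(ClipDescription.MIMETYPE_TEXT_PLAIN)) {
    CharSequence text = pasted.getItemAt(0).getText();
    // ... use the pasted text ...
}
```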

For more information, read the Copy and Paste documentation. You can also see a simple implementation of copy and paste in the API Demos and a more complete implementation in the Note Pad application.

Drag and drop

New APIs simplify drag and drop operations in your application's user interface. A drag operation is the transfer of some kind of data—carried in a ClipData object—from one place to another. The start and end point for the drag operation is a View, so the APIs that directly handle the drag and drop operations are in the View class.

A drag and drop operation has a lifecycle that's defined by several drag actions—each defined by a DragEvent object—such as ACTION_DRAG_STARTED, ACTION_DRAG_ENTERED, and ACTION_DROP. Each view that wants to participate in a drag operation can listen for these actions.

To begin dragging content in your activity, call startDrag() on a View, providing a ClipData object that represents the data to drag, a View.DragShadowBuilder to facilitate the "shadow" that users see under their fingers while dragging, and an Object that can share information about the drag object with views that may receive the object.

To accept a drag object in a View (receive the "drop"), register the view with an OnDragListener by calling setOnDragListener(). When a drag event occurs on the view, the system calls onDrag() for the OnDragListener, which receives a DragEvent describing the type of drag action that has occurred (such as ACTION_DRAG_STARTED, ACTION_DRAG_ENTERED, and ACTION_DROP). During a drag, the system repeatedly calls onDrag() for the view underneath the drag, to deliver a stream of drag events. The receiving view can query the event type by calling getAction() on the DragEvent.
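
A sketch of a drop target follows; the handling logic in each case is illustrative only:

```java
// Register a listener on the view that should accept drops.
targetView.setOnDragListener(new View.OnDragListener() {
    @Override
    public boolean onDrag(View view, DragEvent event) {
        switch (event.getAction()) {
            case DragEvent.ACTION_DRAG_STARTED:
                // Return true to keep receiving events for this drag.
                return true;
            case DragEvent.ACTION_DRAG_ENTERED:
                // Optionally highlight the view as a valid drop target.
                return true;
            case DragEvent.ACTION_DROP:
                // Read the dragged data from the ClipData.
                ClipData.Item item = event.getClipData().getItemAt(0);
                // ... handle the dropped data ...
                return true;
            default:
                return false;
        }
    }
});
```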

Note: Although a drag event may carry a ClipData object, this is not related to the system clipboard. A drag and drop operation should never put the dragged data in the system clipboard.

For more information, read the Dragging and Dropping documentation. You can also see an implementation of drag and drop in the API Demos application and the Honeycomb Gallery application.

App widgets

Android 3.0 supports several new widget classes for more interactive app widgets on the user's Home screen, including: GridView, ListView, StackView, ViewFlipper, and AdapterViewFlipper.

More importantly, you can use the new RemoteViewsService to create app widgets with collections, using widgets such as GridView, ListView, and StackView that are backed by remote data, such as from a content provider.

The AppWidgetProviderInfo class (defined in XML with an <appwidget-provider> element) also supports two new fields: autoAdvanceViewId and previewImage. The autoAdvanceViewId field lets you specify the view ID of the app widget subview that should be auto-advanced by the app widget's host. The previewImage field specifies a preview of what the app widget looks like and is shown to the user from the widget picker. If this field is not supplied, the app widget's icon is used for the preview.
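
For example, an <appwidget-provider> declaration using the two new fields might look like this sketch (the resource names and sizes are hypothetical):

```xml
<!-- res/xml/widget_info.xml (hypothetical) -->
<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="294dp"
    android:minHeight="72dp"
    android:updatePeriodMillis="86400000"
    android:initialLayout="@layout/widget_layout"
    android:autoAdvanceViewId="@id/widget_stack"
    android:previewImage="@drawable/widget_preview" />
```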

To help create a preview image for your app widget (to specify in the previewImage field), the Android emulator includes an application called "Widget Preview." To create a preview image, launch this application, select the app widget for your application and set it up how you'd like your preview image to appear, then save it and place it in your application's drawable resources.

You can see an implementation of the new app widget features in the StackView App Widget and Weather List Widget applications.

Status bar notifications

The Notification APIs have been extended to support more content-rich status bar notifications, and a new Notification.Builder class allows you to easily create Notification objects.

New features include:

  • Support for a large icon in the notification, using setLargeIcon(). This is usually for social applications to show the contact photo of the person who is the source of the notification or for media apps to show an album thumbnail.
  • Support for custom layouts in the status bar ticker, using setTicker().
  • Support for custom notification layouts to include buttons with PendingIntents, for more interactive notification widgets. For example, a notification can control music playback without starting an activity.
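
For example, a basic notification built with the new builder class might look like this sketch (the icon resource, bitmap, text, and PendingIntent are placeholders):

```java
// Build a notification with the new Notification.Builder class.
// On API level 11, getNotification() returns the built object.
Notification notification = new Notification.Builder(context)
        .setSmallIcon(R.drawable.ic_notification) // hypothetical resource
        .setLargeIcon(contactPhotoBitmap)         // e.g., the sender's photo
        .setContentTitle("New message")
        .setContentText("Hello from Android 3.0")
        .setContentIntent(pendingIntent)
        .getNotification();

NotificationManager manager = (NotificationManager)
        context.getSystemService(Context.NOTIFICATION_SERVICE);
manager.notify(1, notification);
```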

Content loaders

New framework APIs facilitate asynchronous loading of data using the Loader class. You can use it in combination with UI components such as views and fragments to dynamically load data from worker threads. The CursorLoader subclass is specially designed to help you do so for data backed by a ContentProvider.

All you need to do is implement the LoaderCallbacks interface to receive callbacks when a new loader is requested or the data has changed, then call initLoader() to initialize the loader for your activity or fragment.
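
A sketch of an activity using a CursorLoader follows; the CONTENT_URI constant stands in for whatever content provider URI your application queries:

```java
// Assumes a content provider addressed by a hypothetical CONTENT_URI.
public class ListActivityExample extends Activity
        implements LoaderManager.LoaderCallbacks<Cursor> {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Kicks off loading; onCreateLoader() is called if no loader exists yet.
        getLoaderManager().initLoader(0, null, this);
    }

    @Override
    public Loader<Cursor> onCreateLoader(int id, Bundle args) {
        // Create a loader that queries the provider on a worker thread.
        return new CursorLoader(this, CONTENT_URI, null, null, null, null);
    }

    @Override
    public void onLoadFinished(Loader<Cursor> loader, Cursor data) {
        // Called on the main thread when data is ready (or has changed).
    }

    @Override
    public void onLoaderReset(Loader<Cursor> loader) {
        // Release any references to the loader's data here.
    }
}
```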

For more information, read the Loaders documentation. You can also see example code using loaders in the LoaderCursor and LoaderThrottle samples.

Bluetooth A2DP and headset APIs

Android now includes APIs for applications to verify the state of connected Bluetooth A2DP and headset profile devices. For example, applications can identify when a Bluetooth headset is connected for listening to music and notify the user as appropriate. Applications can also receive broadcasts for vendor-specific AT commands and notify the user about the state of the connected device, such as when the connected device's battery is low.

You can initialize the respective BluetoothProfile by calling getProfileProxy() with either the A2DP or HEADSET profile constant and a BluetoothProfile.ServiceListener to receive callbacks when the Bluetooth client is connected or disconnected.
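
For example, obtaining the A2DP profile proxy might look like this sketch (the callback bodies are illustrative only; the proxy arrives asynchronously):

```java
// Request the A2DP profile proxy from the default Bluetooth adapter.
BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
adapter.getProfileProxy(context, new BluetoothProfile.ServiceListener() {
    @Override
    public void onServiceConnected(int profile, BluetoothProfile proxy) {
        BluetoothA2dp a2dp = (BluetoothA2dp) proxy;
        // Inspect the connected devices, e.g., a2dp.getConnectedDevices()
    }

    @Override
    public void onServiceDisconnected(int profile) {
        // The proxy is no longer usable.
    }
}, BluetoothProfile.A2DP);
```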

Animation framework

An all-new, flexible animation framework allows you to animate arbitrary properties of any object (View, Drawable, Fragment, Object, or anything else). It allows you to define several aspects of an animation, such as:

  • Duration
  • Repeat amount and behavior
  • Type of time interpolation
  • Animator sets to play animations together, sequentially, or after specified delays
  • Frame refresh delay

You can define these animation aspects, and others, for an object's int, float, and hexadecimal color values, by default. That is, when an object has a property field for one of these types, you can change its value over time to affect an animation. To animate any other type of value, you tell the system how to calculate the values for that given type, by implementing the TypeEvaluator interface.

There are two animators you can use to animate the values of a property: ValueAnimator and ObjectAnimator. The ValueAnimator computes the animation values, but is not aware of the specific object or property that is animated as a result. It simply performs the calculations, and you must listen for the updates and process the data with your own logic. The ObjectAnimator is a subclass of ValueAnimator and allows you to set the object and property to animate, and it handles all animation work. That is, you give the ObjectAnimator the object to animate, the property of the object to change over time, and a set of values to apply to the property over time, then start the animation.
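
To illustrate the difference, a ValueAnimator requires your own update listener to apply each computed value, whereas an ObjectAnimator (shown in the view-transformation example later in this document) sets the property directly. A sketch, assuming a view named myView:

```java
// ValueAnimator computes the values; you apply them yourself.
ValueAnimator animator = ValueAnimator.ofFloat(0f, 1f);
animator.setDuration(500);
animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
    @Override
    public void onAnimationUpdate(ValueAnimator animation) {
        float value = (Float) animation.getAnimatedValue();
        myView.setAlpha(value); // apply the computed value manually
    }
});
animator.start();
```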

Additionally, the LayoutTransition class enables automatic transition animations for changes you make to your activity layout. To enable transitions for part of the layout, create a LayoutTransition object and set it on any ViewGroup by calling setLayoutTransition(). This causes default animations to run whenever items are added to or removed from the group. To specify custom animations, call setAnimator() on the LayoutTransition and provide a custom Animator, such as a ValueAnimator or ObjectAnimator discussed above.

For more information, see the Property Animation documentation. You can also see several samples using the animation APIs in the API Demos application.

Extended UI framework

  • Multiple-choice selection for ListView and GridView

    New CHOICE_MODE_MULTIPLE_MODAL mode for setChoiceMode() allows users to select multiple items from a ListView or GridView. When used in conjunction with the Action Bar, users can select multiple items and then select the action to perform from a list of options in the Action Bar (which has transformed into a Multi-choice Action Mode).

    To enable multiple-choice selection, call setChoiceMode(CHOICE_MODE_MULTIPLE_MODAL) and register a MultiChoiceModeListener with setMultiChoiceModeListener().

    When the user performs a long-press on an item, the Action Bar switches to the Multi-choice Action Mode. The system notifies the MultiChoiceModeListener when items are selected by calling onItemCheckedStateChanged().

    For an example of multiple-choice selection, see the List15.java class in the API Demos sample application.

  • New APIs to transform views

    New APIs allow you to easily apply 2D and 3D transformations to views in your activity layout. New transformations are made possible with a set of object properties that define the view's layout position, orientation, transparency and more.

    New methods that set the view properties include: setAlpha(), setBottom(), setLeft(), setRight(), setTop(), setPivotX(), setPivotY(), setRotationX(), setRotationY(), setScaleX(), setScaleY(), and others.

    Some methods also have a corresponding XML attribute that you can specify in your layout file, to apply a default transformation. Available attributes include: translationX, translationY, rotation, rotationX, rotationY, scaleX, scaleY, transformPivotX, transformPivotY, and alpha.

    Using some of these new view properties in combination with the new animation framework (discussed above), you can easily apply some fancy animations to your views. For example, to rotate a view on its y-axis, supply ObjectAnimator with the View, the "rotationY" property, and the start and end values:

    ObjectAnimator animator = ObjectAnimator.ofFloat(myView, "rotationY", 0, 360);
    animator.setDuration(2000);
    animator.start();
    
  • New holographic themes

    The standard system widgets and overall look have been redesigned and incorporate a new "holographic" user interface theme. The system applies the new theme using the standard style and theme system.

    Any application that targets the Android 3.0 platform—by setting either the android:minSdkVersion or android:targetSdkVersion value to "11"—inherits the holographic theme by default. However, if your application also applies its own theme, then your theme will override the holographic theme, unless you update your styles to inherit the holographic theme.

    To apply the holographic theme to individual activities or to inherit it in your own theme definitions, use one of several new Theme.Holo themes. If your application is compatible with versions of Android lower than 3.0 and applies custom themes, then you should select a theme based on platform version.

  • New widgets
    • AdapterViewAnimator

      Base class for an AdapterView that performs animations when switching between its views.

    • AdapterViewFlipper

      Simple ViewAnimator that animates between two or more views that have been added to it. Only one child is shown at a time. If requested, it can automatically flip between each child at a regular interval.

    • CalendarView

      Allows users to select dates from a calendar by touching the date and can scroll or fling the calendar to a desired date. You can configure the range of dates available in the widget.

    • ListPopupWindow

      Anchors itself to a host view and displays a list of choices, such as for a list of suggestions when typing into an EditText view.

    • NumberPicker

      Enables the user to select a number from a predefined range. The widget presents an input field and up and down buttons for selecting a number. Touching the input field allows the user to scroll through values or touch again to directly edit the current value. It also allows you to map positions to strings, so that the corresponding string is displayed instead of the index position.

    • PopupMenu

      Displays a Menu in a modal popup window that's anchored to a view. The popup appears below the anchor view if there is room, or above it if there is not. If the IME (soft keyboard) is visible, the popup does not overlap the IME until the user touches the menu.

    • SearchView

      Provides a search box that you can configure to deliver search queries to a specified activity and display search suggestions (in the same manner as the traditional search dialog). This widget is particularly useful for offering a search widget in the Action Bar. For more information, see Creating a Search Interface.

    • StackView

      A view that displays its children in a 3D stack and allows users to swipe through views like a rolodex.
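
Tying these features together, the multiple-choice selection mode described at the start of this section might be wired up as follows (the listener bodies and the menu resource are hypothetical):

```java
// Enable modal multiple-choice selection on a ListView (API level 11).
listView.setChoiceMode(ListView.CHOICE_MODE_MULTIPLE_MODAL);
listView.setMultiChoiceModeListener(new AbsListView.MultiChoiceModeListener() {
    @Override
    public void onItemCheckedStateChanged(ActionMode mode, int position,
            long id, boolean checked) {
        // Update the action mode title with the selection count, for example.
    }

    @Override
    public boolean onCreateActionMode(ActionMode mode, Menu menu) {
        // Inflate the actions shown while items are selected.
        mode.getMenuInflater().inflate(R.menu.context_menu, menu); // hypothetical
        return true;
    }

    @Override
    public boolean onPrepareActionMode(ActionMode mode, Menu menu) {
        return false;
    }

    @Override
    public boolean onActionItemClicked(ActionMode mode, MenuItem item) {
        // Perform the chosen action on the selected items here.
        return false;
    }

    @Override
    public void onDestroyActionMode(ActionMode mode) { }
});
```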

Graphics

  • Hardware accelerated 2D graphics

    You can now enable the OpenGL renderer for your application by setting android:hardwareAccelerated="true" in your manifest's <application> element or for individual <activity> elements.

    This flag helps applications by making them draw faster. This results in smoother animations, smoother scrolling, and overall better performance and response to user interaction.

  • View support for hardware and software layers

    By default, a View has no layer specified. You can specify that the view be backed by either a hardware or software layer, specified by values LAYER_TYPE_HARDWARE and LAYER_TYPE_SOFTWARE, using setLayerType() or the layerType attribute.

    A hardware layer is backed by a hardware-specific texture (generally a Frame Buffer Object, or FBO, on OpenGL hardware) and causes the view to be rendered using Android's hardware rendering pipeline, but only if hardware acceleration is turned on for the view hierarchy. When hardware acceleration is turned off, hardware layers behave exactly as software layers.

    A software layer is backed by a bitmap and causes the view to be rendered using Android's software rendering pipeline, even if hardware acceleration is enabled. Software layers should be avoided when the affected view tree updates often, because every update requires re-rendering the software layer, which can be slow.

    For more information, see the LAYER_TYPE_HARDWARE and LAYER_TYPE_SOFTWARE documentation.

  • Renderscript 3D graphics engine

    Renderscript is a runtime 3D framework that provides both an API for building 3D scenes as well as a special, platform-independent shader language for maximum performance. Using Renderscript, you can accelerate graphics operations and data processing. Renderscript is an ideal way to create high-performance 3D effects for applications, wallpapers, carousels, and more.

    For more information, see the 3D Rendering and Computation with Renderscript documentation.
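
The hardware-acceleration flag described at the top of this section is declared in the manifest; as a sketch, it can be set application-wide and overridden per activity (the activity name is hypothetical):

```xml
<!-- Enable the OpenGL renderer for the whole application,
     but opt one activity out of hardware acceleration. -->
<application android:hardwareAccelerated="true" ... >
    <activity android:name=".SoftwareOnlyActivity"
              android:hardwareAccelerated="false" />
</application>
```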

Media

  • Time lapse video

    Camcorder APIs now support the ability to record time lapse video. The setCaptureRate() method sets the rate at which frames should be captured.

  • Texture support for image streams

    New SurfaceTexture allows you to capture an image stream as an OpenGL ES texture. By calling setPreviewTexture() for your Camera instance, you can specify the SurfaceTexture upon which to draw video playback or preview frames from the camera.

  • HTTP Live streaming

    Applications can now pass an M3U playlist URL to the media framework to begin an HTTP Live streaming session. The media framework supports most of the HTTP Live streaming specification, including adaptive bit rate. See the Supported Media Formats document for more information.

  • EXIF data

    The ExifInterface includes new fields for photo aperture, ISO, and exposure time.

  • Camcorder profiles

    New hasProfile() method and several video quality profiles (such as QUALITY_1080P, QUALITY_720P, QUALITY_CIF, and others) allow you to determine camcorder quality options.

  • Digital media file transfer

    The platform includes built-in support for Media/Picture Transfer Protocol (MTP/PTP) over USB, which lets users easily transfer any type of media files between devices and to a host computer. Developers can build on this support, creating applications that let users create or manage rich media files that they may want to transfer or share across devices.

  • Digital rights management (DRM)

    Android 3.0 includes a new extensible digital rights management (DRM) framework for checking and enforcing digital rights. It's implemented in two architectural layers:

    • A DRM framework API, which is exposed to applications and runs through the Dalvik VM for standard applications.
    • A native code DRM manager that implements the framework API and exposes an interface for DRM plug-ins to handle rights management and decryption for various DRM schemes.

    For application developers, the framework offers an abstract, unified API that simplifies the management of protected content. The API hides the complexity of DRM operations and allows a consistent operation mode for both protected and unprotected content, and across a variety of DRM schemes.

    For device manufacturers, content owners, and Internet digital media providers, the DRM framework's plug-in API provides a means of adding support for a DRM scheme of choice into the Android system, for secure enforcement of content protection.

    The preview release does not provide any native DRM plug-ins for checking and enforcing digital rights. However, device manufacturers may ship DRM plug-ins with their devices.

    You can find all of the DRM APIs in the android.drm package.

Keyboard support

  • Support for Control, Meta, Caps Lock, Num Lock, and Scroll Lock modifiers. For more information, see META_CTRL_ON and related fields.
  • Support for full desktop-style keyboards, including support for keys such as Escape, Home, End, Delete, and others. You can determine whether key events are coming from a full keyboard by querying getKeyboardType() and checking for KeyCharacterMap.FULL.
  • TextView now supports keyboard-based cut, copy, paste, and select-all, using the key combinations Ctrl+X, Ctrl+C, Ctrl+V, and Ctrl+A. It also supports PageUp/PageDown, Home/End, and keyboard-based text selection.
  • KeyEvent adds several new methods to make it easier to check the key modifier state correctly and consistently. See hasModifiers(int), hasNoModifiers(), metaStateHasModifiers(), metaStateHasNoModifiers().
  • Applications can implement custom keyboard shortcuts by subclassing Activity, Dialog, or View and implementing onKeyShortcut(). The framework calls this method whenever a key is combined with the Ctrl key. When creating an Options Menu, you can register keyboard shortcuts by setting either the android:alphabeticShortcut or android:numericShortcut attribute for each <item> element (or with setShortcut()).
  • Android 3.0 includes a new "virtual keyboard" device with the id KeyCharacterMap.VIRTUAL_KEYBOARD. The virtual keyboard has a desktop-style US key map which is useful for synthesizing key events for testing input.
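
As a sketch, an Activity might handle Ctrl+E as a custom shortcut using onKeyShortcut() and the new hasModifiers() method (the action performed is illustrative):

```java
// Inside an Activity subclass on API level 11 or higher.
@Override
public boolean onKeyShortcut(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_E
            && event.hasModifiers(KeyEvent.META_CTRL_ON)) {
        // ... perform the shortcut action ...
        return true; // the event was handled
    }
    return super.onKeyShortcut(keyCode, event);
}
```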

Split touch events

Previously, only a single view could accept touch events at one time. Android 3.0 adds support for splitting touch events across views and even windows, so different views can accept simultaneous touch events.

Split touch events are enabled by default when an application targets Android 3.0. That is, when the application has set either the android:minSdkVersion or android:targetSdkVersion attribute's value to "11".

However, the following properties allow you to disable split touch events across views inside specific view groups and across windows.

  • The android:splitMotionEvents attribute for view groups allows you to disable split touch events that occur between child views in a layout. For example:
    <LinearLayout android:splitMotionEvents="false" ... >
        ...
    </LinearLayout>
    

    This way, child views in the linear layout cannot split touch events—only one view can receive touch events at a time.

  • The android:windowEnableSplitTouch style property allows you to disable split touch events across windows, by applying it to a theme for the activity or entire application. For example:
    <style name="NoSplitMotionEvents" parent="android:Theme.Holo">
        <item name="android:windowEnableSplitTouch">false</item>
        ...
    </style>
    

    When this theme is applied to an <activity> or <application>, only touch events within the current activity window are accepted. For example, by disabling split touch events across windows, the system bar cannot receive touch events at the same time as the activity. This does not affect whether views inside the activity can split touch events—by default, the activity can still split touch events across views.

    For more information about creating a theme, read Applying Styles and Themes.

WebKit

  • New WebViewFragment class to create a fragment composed of a WebView.
  • New WebSettings methods:
    • setDisplayZoomControls() allows you to hide the on-screen zoom controls while still allowing the user to zoom with finger gestures (setBuiltInZoomControls() must be set to true).
    • setEnableSmoothTransition() allows you to enable smooth transitions when panning and zooming. When enabled, WebView will choose a solution to maximize the performance (for example, the WebView's content may not update during the transition).
  • New WebView methods:
    • onPause() callback, to pause any processing associated with the WebView when it becomes hidden. This is useful to reduce unnecessary CPU or network traffic when the WebView is not in the foreground.
    • onResume() callback, to resume processing associated with the WebView, which was paused during onPause().
    • saveWebArchive() allows you to save the current view as a web archive on the device.
    • showFindDialog() initiates a text search in the current view.
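
A sketch combining the new WebSettings and WebView methods might look like this (the view ID is hypothetical):

```java
// Configure zoom behavior: pinch-zoom only, no on-screen buttons.
WebView webView = (WebView) findViewById(R.id.webview); // hypothetical ID
WebSettings settings = webView.getSettings();
settings.setBuiltInZoomControls(true);
settings.setDisplayZoomControls(false);

// In the activity's lifecycle callbacks:
// webView.onPause();  // when the WebView becomes hidden
// webView.onResume(); // when it returns to the foreground
```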

Browser

The Browser application adds the following features to support web applications:

  • Media capture

    As defined by the HTML Media Capture specification, the Browser allows web applications to access audio, image and video capture capabilities of the device. For example, the following HTML provides an input for the user to capture a photo to upload:

    <input type="file" accept="image/*;capture=camera" />
    

    Or by excluding the capture=camera parameter, the user can choose to either capture a new image with the camera or select one from the device (such as from the Gallery application).

  • Device Orientation

    As defined by the Device Orientation Event specification, the Browser allows web applications to listen to DOM events that provide information about the physical orientation and motion of the device.

    The device orientation is expressed with the x, y, and z axes, in degrees and motion is expressed with acceleration and rotation rate data. A web page can register for orientation events by calling window.addEventListener with event type "deviceorientation" and register for motion events by registering the "devicemotion" event type.

  • CSS 3D Transforms

    As defined by the CSS 3D Transform Module specification, the Browser allows elements rendered by CSS to be transformed in three dimensions.

JSON utilities

New classes, JsonReader and JsonWriter, help you read and write JSON streams. The new APIs complement the org.json classes, which manipulate a document in memory.

You can create an instance of JsonReader by calling its constructor and passing a Reader (such as an InputStreamReader) that feeds the JSON string. Then begin reading an object by calling beginObject(), read a key name with nextName(), read the value using methods respective to the type, such as nextString() and nextInt(), and continue doing so while hasNext() is true.

You can create an instance of JsonWriter by calling its constructor and passing the appropriate OutputStreamWriter. Then write the JSON data in a manner similar to the reader, using name() to add a property name and an appropriate value() method to add the respective value.
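
A sketch of reading a simple object with android.util.JsonReader follows; the field names are illustrative, and the read methods can throw IOException:

```java
// Parse {"name":"Droid","apiLevel":11} from a Reader.
String json = "{\"name\":\"Droid\",\"apiLevel\":11}";
JsonReader reader = new JsonReader(new StringReader(json));
String name = null;
int apiLevel = 0;

reader.beginObject();
while (reader.hasNext()) {
    String key = reader.nextName();
    if (key.equals("name")) {
        name = reader.nextString();
    } else if (key.equals("apiLevel")) {
        apiLevel = reader.nextInt();
    } else {
        reader.skipValue(); // ignore unrecognized keys
    }
}
reader.endObject();
reader.close();
```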

These classes are strict by default. The setLenient() method in each class configures them to be more liberal in what they accept. This lenient parse mode is also compatible with the org.json default parser.

New feature constants

The <uses-feature> manifest element should be used to inform external entities (such as Google Play) of the set of hardware and software features on which your application depends. In this release, Android adds the following new constants that applications can declare with this element:

  • "android.hardware.faketouch"

    When declared, this indicates that the application is compatible with a device that offers an emulated touchscreen (or better). A device that offers an emulated touchscreen provides a user input system that can emulate a subset of touchscreen capabilities. An example of such an input system is a mouse or remote control that drives an on-screen cursor. Such input systems support basic touch events like click down, click up, and drag. However, more complicated input types (such as gestures, flings, etc.) may be more difficult or impossible on faketouch devices (and multitouch gestures are definitely not possible).

    If your application does not require complicated gestures and you do not want your application filtered from devices with an emulated touchscreen, you should declare "android.hardware.faketouch" with a <uses-feature> element. This way, your application will be available to the greatest number of device types, including those that provide only an emulated touchscreen input.

    All devices that include a touchscreen also support "android.hardware.faketouch", because touchscreen capabilities are a superset of faketouch capabilities. Thus, unless you actually require a touchscreen, you should add a <uses-feature> element for faketouch.
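    For example, the manifest declaration might look like this (android:required="false" is the conventional way to mark a feature as optional rather than required):

    ```xml
    <!-- The application works with any device that offers at least an
         emulated touchscreen (for example, a mouse or remote control). -->
    <uses-feature android:name="android.hardware.faketouch"
                  android:required="true" />
    ```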

New permissions

  • "android.permission.BIND_REMOTEVIEWS"

    This must be declared as a required permission in the <service> manifest element for an implementation of RemoteViewsService. For example, when creating an App Widget that uses RemoteViewsService to populate a collection view, the manifest entry may look like this:

    <service android:name=".widget.WidgetService"
        android:exported="false"
        android:permission="android.permission.BIND_REMOTEVIEWS" />
    

New platform technologies

  • Storage
    • ext4 file system support to enable onboard eMMC storage.
    • FUSE file system to support MTP devices.
    • USB host mode support for keyboards and USB hubs.
    • Support for MTP/PTP
  • Linux Kernel
    • Upgraded to 2.6.36
  • Dalvik VM
    • New code to support and optimize for SMP
    • Various improvements to the JIT infrastructure
    • Garbage collector improvements:
      • Tuned for SMP
      • Support for larger heap sizes
      • Unified handling for bitmaps and byte buffers
  • Dalvik Core Libraries
    • New, much faster implementation of NIO (modern I/O library)
    • Improved exception messages
    • Correctness and performance fixes throughout

API differences report

For a detailed view of all API changes in Android 3.0 (API Level 11), see the API Differences Report.

API Level

The Android 3.0 platform delivers an updated version of the framework API. The Android 3.0 API is assigned an integer identifier — 11 — that is stored in the system itself. This identifier, called the "API Level", allows the system to correctly determine whether an application is compatible with the system, prior to installing the application.

To use APIs introduced in Android 3.0 in your application, you need to compile the application against the Android library that is provided in the Android 3.0 SDK platform. Depending on your needs, you might also need to add an android:minSdkVersion="11" attribute to the <uses-sdk> element in the application's manifest. If your application is designed to run only on Android 3.0 and higher, declaring the attribute prevents the application from being installed on earlier versions of the platform.
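For example, the declaration inside the manifest's root <manifest> element might look like this:

```xml
<!-- Prevents installation on devices running a platform
     version lower than Android 3.0 (API Level 11). -->
<uses-sdk android:minSdkVersion="11" />
```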

For more information about how to use API Level, see the API Levels document.

Built-in Applications

The system image included in the downloadable platform provides these built-in applications:

  • API Demos
  • Browser
  • Calculator
  • Camera
  • Clock
  • Contacts
  • Custom Locale
  • Dev Tools
  • Downloads
  • Email
  • Gallery
  • Gestures Builder
  • Messaging
  • Music
  • Search
  • Settings
  • Spare Parts
  • Speech Recorder
  • Widget Preview

Locales

The system image included in the downloadable SDK platform provides a variety of built-in locales. In some cases, region-specific strings are available for the locales. In other cases, a default version of the language is used. The languages that are available in the Android 3.0 system image are listed below (with language_country/region locale descriptor).

  • Arabic, Egypt (ar_EG)
  • Arabic, Israel (ar_IL)
  • Bulgarian, Bulgaria (bg_BG)
  • Catalan, Spain (ca_ES)
  • Czech, Czech Republic (cs_CZ)
  • Danish, Denmark (da_DK)
  • German, Austria (de_AT)
  • German, Switzerland (de_CH)
  • German, Germany (de_DE)
  • German, Liechtenstein (de_LI)
  • Greek, Greece (el_GR)
  • English, Australia (en_AU)
  • English, Canada (en_CA)
  • English, Britain (en_GB)
  • English, Ireland (en_IE)
  • English, India (en_IN)
  • English, New Zealand (en_NZ)
  • English, Singapore (en_SG)
  • English, US (en_US)
  • English, South Africa (en_ZA)
  • Spanish (es_ES)
  • Spanish, US (es_US)
  • Finnish, Finland (fi_FI)
  • French, Belgium (fr_BE)
  • French, Canada (fr_CA)
  • French, Switzerland (fr_CH)
  • French, France (fr_FR)
  • Hebrew, Israel (he_IL)
  • Hindi, India (hi_IN)
  • Croatian, Croatia (hr_HR)
  • Hungarian, Hungary (hu_HU)
  • Indonesian, Indonesia (id_ID)
  • Italian, Switzerland (it_CH)
  • Italian, Italy (it_IT)
  • Japanese (ja_JP)
  • Korean (ko_KR)
  • Lithuanian, Lithuania (lt_LT)
  • Latvian, Latvia (lv_LV)
  • Norwegian bokmål, Norway (nb_NO)
  • Dutch, Belgium (nl_BE)
  • Dutch, Netherlands (nl_NL)
  • Polish (pl_PL)
  • Portuguese, Brazil (pt_BR)
  • Portuguese, Portugal (pt_PT)
  • Romanian, Romania (ro_RO)
  • Russian (ru_RU)
  • Slovak, Slovakia (sk_SK)
  • Slovenian, Slovenia (sl_SI)
  • Serbian (sr_RS)
  • Swedish, Sweden (sv_SE)
  • Thai, Thailand (th_TH)
  • Tagalog, Philippines (tl_PH)
  • Turkish, Turkey (tr_TR)
  • Ukrainian, Ukraine (uk_UA)
  • Vietnamese, Vietnam (vi_VN)
  • Chinese, PRC (zh_CN)
  • Chinese, Taiwan (zh_TW)
Note: The Android platform may support more locales than are included in the SDK system image. All of the supported locales are available in the Android Open Source Project.

Emulator Skins

The downloadable platform includes the following emulator skin:

  • WXGA (1280x800, medium density, xlarge screen)

For more information about how to develop an application that displays and functions properly on all Android-powered devices, see Supporting Multiple Screens.