IEPG Meeting, Sunday July 14, 10am onward
(Caution: my notes, my typos, my interpretations! - Geoff Huston)
This effort is a Keio University / WIDE research project as a showcase of Internet technologies with mobile IPv6 networking. The project's objectives are to demonstrate the potential role of the Internet in other industry sectors, in this case the automotive industry and in various applications that relate to vehicle traffic control.
The project's assumed environment for Internet-enabled cars is a mix of 802.11, cellular infrastructure and Dedicated Short Range Communications (DSRC) for hot points (such as toll booth local radio). The target is a seamless IP environment as the car roams across these connectivity environments.
There are 3 connectivity models that were evaluated within the scope of the project: the single in-vehicle computer, an in-vehicle router with multiple LAN devices in the car (possibly envisaging the car's wiring loom as a LAN with multiple sensors and devices attached), and the multi-router model in the car (a more complex model with an internal router and an external mobile router). (The multi-router model is currently under development.) I like the diagram of a LAN-connected car key in the presentation!
The current project runs about 60 cars in the Yokohama area.
The single computer model assumes each device roams in the mobile network independently. This uses mobile IPv6 with some extensions and some effort to undertake route optimization. The project evaluated a potential tunnelling solution, but this was avoided in favour of a full implementation using mobile IPv6.
In matching the various car models to the mobility model, the mobility aspects include consideration of the differences between a mobile host and a mobile network.
Interface management uses an "interface manager" model to control dynamic V6 tunnelling over the available transports. There is a layer 2 trigger mechanism for the V6 tunnel interface manager to select the "best" available V4 transport interface (where "best" is a price / quality algorithm). The interface management system also supports native V6 over 802.11 with integrated L2 signalling back to the interface manager.
Dedicated Short Range Communications (DSRC) - the traffic management sector is attempting to use 5GHz as the carrier frequency, as does 802.11. There is work on developing IP over PPP over DSRC and also an Ethernet-type broadcast DSRC.
Related projects include IPCar (www.IPCar.org), which has a web interface that can probe traffic and weather information as a live update from the cars, and InternetITS (InternetITS.org), which used 1570 cars in a Nagoya testbed.
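The "best interface" selection described above could be sketched roughly as follows. This is a minimal sketch only: the interface names, the score weights and the linear price/quality score are my assumptions, not the project's actual algorithm.

```python
# Hypothetical sketch of a price/quality interface selector for the
# in-vehicle interface manager.  Names, costs and quality metrics are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Interface:
    name: str          # e.g. "802.11", "cellular", "DSRC"
    up: bool           # layer 2 trigger: is the link currently available?
    price: float       # cost per MB (lower is better)
    quality: float     # 0..1 link quality estimate (higher is better)

def best_interface(ifaces, price_weight=1.0, quality_weight=1.0):
    """Pick the 'best' available transport by a simple price/quality score."""
    candidates = [i for i in ifaces if i.up]
    if not candidates:
        return None
    # Higher quality and lower price both raise the score.
    score = lambda i: quality_weight * i.quality - price_weight * i.price
    return max(candidates, key=score)

ifaces = [
    Interface("cellular", up=True,  price=0.5, quality=0.6),
    Interface("802.11",   up=True,  price=0.0, quality=0.9),
    Interface("DSRC",     up=False, price=0.0, quality=1.0),  # out of range
]
print(best_interface(ifaces).name)  # 802.11 wins: free and high quality
```

The layer 2 trigger maps to the `up` flag here: when a link comes up or goes down, the manager re-runs the selection and re-points the V6 tunnel.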
Applications suggested include monitoring for congested streets with live updates, as a cheaper method than using in-road sensor systems. Also signalling the car's windscreen wiper activity rate and current location back to a collector to indicate wet weather. For taxis, the incidence of local showers can be used to show likely concentrations of passengers. There is some potential for further work on the minimum number of cars (sample points) needed to obtain reliable environmental information.
- New IPv6 policy implemented on 1 July (Ripe and APNIC) and coming month (ARIN)
- registration transfer project - to transfer historic allocations to the appropriate "home" RIR, coordination and documentation underway
- Staff exchange between RIRs
- Presentation on RIR industry self-regulatory profile to various industry sectors and bodies
- coordinated training efforts
Emerging RIR support:
- LACNIC - working transition with ARIN
- AFRINIC - working transition with RIPE
- Secretariat moves to APNIC for 2003
- ASO GA to be hosted by ARIN in 2003
Open Policy meetings
- APNIC Japan Sept
- ARIN (+NANOG) Oregon Oct
- RIPE Greece Sept
- unallocated pool is at 37% of the total address space
- note reducing allocation rate in 2002 compared to 1999 - 2001 trends
- by country breakdown shows JP, CN, KR, US and AU as the largest national domains for allocations
- ARIN greatest in 2002
- AS assignments also shows a decreasing rate
- APNIC and RIPE have allocated 125 blocks - ARIN has allocated 29 historically
- 2002 shows continued growth
- by country: JP, US, DE and KR as largest national entities
- whois v3 migration (Mid August) test at v3.whois.net
- irr at irr.apnic.net and will migrate to the whois db
- delegations v6 /21 and v4 221/8
- MyAPNIC integrated frontend myapnic.net
- new DB to be released in Aug with training underway
- LACNIC transition support - final approval of LACNIC anticipated in November 2002
- V6 /23
- new V4 policy documentation
- secure member service web portal under development
- RIPE NCC Member survey being conducted
- RIPE RIS has a US presence at MAE-WEST
- phased out MAIL-FROM in favour of MD5-PW authentication
- routing registry full prototype and IRR toolset developed - plans include reporting and interface enhancements
A look at the effects of the EBONE shutdown.
Background of a KPNQWEST build.
First shutdown commencing July 2 (EBONE), with next major shutdown planned for July 19
Used the RIPE NCC one-way delay / loss measurement network. Some 2000 paths had some EBONE implications.
Use of TTM alarms for long and short term average deviation:
- The TTM alarms rose from a daily average of 400/day to ~650 on July 2
- The RIPE hourly data shows the work progress of the shutdown
A look at individual paths for connection loss and rerouting
- some identification of path loss in the data for some single-homed EBONE downstream nets
- some identification of re-routing with subsequent re-engineering to a stable path
- identification of restoration to longer latency paths
- visibility of restoration to lower capacity switching equipment with higher loss and jitter due to overload on the switching infrastructure
- use of a superior backup path
Summary - the Internet survived quite comfortably, with some assertion that it is not as good as before (although the mapping of traffic volume to paths is not available)
Observation of many unanswered DNS queries, and the 8.2.3 lame delegation bug being exercised
The question is whether the DNS is being operated correctly - or how much is bad out there!
JPNIC set up a DNSQC TF intended to improve the DNS quality
JP has about 370,000 delegations with 800,000 NS records in the .jp domain, i.e. 2.16 NS records per delegation. These are scattered across 72,000 DNS servers. The initial survey was conducted June 14 - 18 2002, using dig queries for:
- active monitoring and active messaging on errors
- passive diagnosis to allow others to check their DNS configs
- DNS server version
- lame delegation
- consistency on NS record set
Of the 72,000 servers:
- 5% had non resolvable names (no IP addresses)
- 12% of the servers were not accessible
Of the 800,000 NS records:
- 16.1% were lame
- 33% of .jp domains were lame (recent delegations)
- 10% of the hierarchical .jp domains (ac.jp etc) were lame
- only 65.5% of the NS sets were consistent
More testing of SOA values, NS pointing to CNAME, etc. is being considered
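The checks in the survey amount to simple classification rules over each delegation's NS set. A rough sketch of that logic follows; the function name and the observation record shape are my assumptions, and a live survey would populate the observations with real dig results rather than canned data.

```python
# Hypothetical classifier for a single delegation, modelled on the
# JPNIC checks above: non-resolvable NS names, inaccessible servers,
# lame delegations, and NS-set consistency.

def classify_delegation(ns_records, observations):
    """Return a list of (ns, problem) pairs for one delegation.

    ns_records:   list of NS names from the parent zone
    observations: dict ns_name -> {"addr": IP or None,
                                   "responds": bool,
                                   "authoritative": bool,
                                   "ns_set": list of NS names reported}
    """
    problems = []
    for ns in ns_records:
        obs = observations.get(ns, {})
        if not obs.get("addr"):
            problems.append((ns, "non-resolvable name"))
        elif not obs.get("responds"):
            problems.append((ns, "not accessible"))
        elif not obs.get("authoritative"):
            problems.append((ns, "lame delegation"))
    # Consistency check: every responding server should report the same NS set.
    reported = {tuple(sorted(o["ns_set"])) for o in observations.values()
                if o.get("responds") and "ns_set" in o}
    if len(reported) > 1:
        problems.append(("*", "inconsistent NS set"))
    return problems

obs = {
    "ns1.example.jp": {"addr": "192.0.2.1", "responds": True,
                       "authoritative": True,
                       "ns_set": ["ns1.example.jp", "ns2.example.jp"]},
    "ns2.example.jp": {"addr": None},  # NS name does not resolve
}
print(classify_delegation(["ns1.example.jp", "ns2.example.jp"], obs))
```

Aggregating these per-delegation results over the 800,000 NS records gives the percentages reported above.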
Scan of in-addr.arpa in the APNIC zones for 97 days
Found that an average of 20% to 30% have problems:
for 97 days:
- one or more NS not visible
- SOA mismatch
across the sample period:
- 10% - 15% are fully lame (no functional NS)
(note - only one probe point and routing inconsistencies may impact this number.)
- 33% of the domains are always visible
- 43% are 99% available
- 11% are fully lame for the entire period
- 18% were intermittently lame
The fully lame count is flat-lined across the period.
Partially lame values show higher levels of change day by day.
Histogram of lameness shows that there is a strong clustering of fully lame and slightly lame - no real incidence of 50% lameness over the measurement period.
Propose to nag-enable the in-addr.arpa delegations. More consideration is needed about what to do with persistent lame delegations (disable?)
Presentation of work undertaken by Andre Broido. There is leakage of RFC1918 space, with requests to the root servers referencing 1918 address space (_rfc1918_.in-addr.arpa)
Over 1M hosts sending messages attempting to update private address records.
Illustration of diurnal patterns with strong peaks at clock hour ticks in each local timezone (3 US, 2 EUR, 3 Asia)
Guessing that large sites are not generating these updates; instead these are smaller sites that appear to be business related (low weekend levels) and are not adequately managed at the firewall / net interface level to trap out such bogus DNS requests.
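Trapping these bogus updates at the site boundary is mechanical: the query names fall under the reverse map of RFC1918 space, which is easy to recognise. A minimal stdlib sketch follows; the reverse-zone derivation is standard, while the filtering function itself is my illustration rather than anything presented.

```python
# Sketch: recognise DNS queries/updates that reference RFC1918 reverse
# zones - the kind of root-server leakage described above.

import ipaddress

RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def reverse_zone(addr):
    """in-addr.arpa name for a V4 address: 10.1.2.3 -> 3.2.1.10.in-addr.arpa"""
    return ipaddress.ip_address(addr).reverse_pointer

def is_private_reverse_query(qname):
    """True if qname falls under the reverse map of RFC1918 space."""
    if not qname.endswith(".in-addr.arpa"):
        return False
    octets = qname[:-len(".in-addr.arpa")].split(".")[::-1]
    # Pad partial reverse names (e.g. '168.192.in-addr.arpa') to a full address.
    octets += ["0"] * (4 - len(octets))
    try:
        ip = ipaddress.ip_address(".".join(octets))
    except ValueError:
        return False
    return any(ip in net for net in RFC1918)

print(reverse_zone("10.1.2.3"))                           # 3.2.1.10.in-addr.arpa
print(is_private_reverse_query("3.2.1.10.in-addr.arpa"))  # True
print(is_private_reverse_query("1.2.0.192.in-addr.arpa")) # False (192.0.2.1)
```

A resolver or firewall applying this test at the site edge would keep these updates from ever reaching the root servers.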
Lameness is defined as:
- an NS record with a server name that has no glue / IP address, OR
- a server that does not respond to DNS queries (greyer), OR
- a server that responds with a negative "no such domain"
A server MIGHT be lame when it has multiple IP addresses, only some of which respond, or when it gives a non-authoritative (i.e. recursive) response.
This effort is targeted at ARIN reverse map delegations.
Some servers have both good and lame zones.
- servers / zone avg 2.37
- addrs per zone avg 2.32
- zones with NO IP addrs 3,000
- zones with 1 IP addr 7,300
16% of all the zone servers are non-responsive for all zones.
30% of servers just not working is pretty consistent with other studies of DNS Lameness
The test code takes some 11 hours to run! (has to ask every server for every zone) UDP congestion control would help!
For the reports, see www.apnic.net/stats/bgp.
- search for 192/8 space for unassigned addresses (working on a more accurate answer of the size of the unallocated pool)
- count of advertised prefixes that are smaller than actual registry allocations cleaned up
- unique prefixes
- maximal aggregation analysis
(bgp.potaroo.net also has a set of reports on this,including analysis of route-views data)
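As an aside, the flavour of the "maximal aggregation" analysis can be shown with the stdlib: collapse the advertised prefixes into the smallest equivalent set. A toy sketch, with an invented prefix list and ignoring origin-AS constraints that the real reports would have to respect:

```python
# Toy illustration of maximal aggregation: collapse a set of advertised
# prefixes into the minimal equivalent set.  The prefixes are invented,
# and real aggregation would also have to honour origin AS and policy.

import ipaddress

advertised = [
    "192.0.2.0/25", "192.0.2.128/25",   # two halves of one /24
    "198.51.100.0/24",
    "203.0.113.0/24",
]

nets = [ipaddress.ip_network(p) for p in advertised]
aggregated = list(ipaddress.collapse_addresses(nets))

print(len(nets), "prefixes collapse to", len(aggregated))
for n in aggregated:
    print(n)
```

The gap between the advertised count and the collapsed count is one rough measure of how much routing table growth comes from de-aggregation.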
A look at the growing level of complexity within Internet Platforms and its implications (draft-ymbk-draft-guidelines-00.txt)
The argument is that complexity is the primary input that impedes scaling and generates additional overhead.
The principles used here are amplification and coupling, to suggest that large scale complex systems are detrimental to quality outcomes, and that system reliability and scalability are not natural outcomes of excessive complexity and single-minded attention limited exclusively to component reliability.