
Data Acquisition in Vehicular Ad Hoc Networks

The data comes from multiple optimal sources in parallel, helping reduce addressing and data-acquisition latency.

With the amount of multimedia data large and growing, low-latency data acquisition is an important practical goal for emerging Internet of Vehicles applications. Multihoming could help reduce such latency because it lets a single node use multiple addresses to acquire data in parallel. Network researchers are thus trying to extend multihoming to vehicular ad hoc networks (VANETs), aiming to reduce latency in the Internet of Vehicles. But in a VANET with multihoming, a vehicle must perform n addressing processes to be configured with addresses with n global network prefixes (GNPs). And getting a vehicle to use addresses with different GNPs to acquire data in parallel through the standard communication models is a significant engineering challenge. Here, I propose DAVM, a scheme that uses an address-separation mechanism so vehicles can be configured with addresses with different GNPs in a single addressing process and extends the k-anycast model so they can acquire data in parallel.


Key Insights

  • Because the volume of multimedia data is so large, low-latency data acquisition is important for ensuring vehicles get the data they need.
  • Acquiring multimedia data through vehicular ad hoc networks helps deliver that data to networked vehicles.
  • The k-anycast model can be extended to vehicular ad hoc networks so vehicles can acquire data in parallel, reducing data-acquisition latency.

Vehicles on the road today carry abundant computing and storage, producing demand for connecting VANETs to the Internet so vehicles can acquire a variety of multimedia data.1,6,12 In multihoming, one IP domain is identified by n (n ≥ 2) GNPs, and a node can be configured with n addresses with different GNPs.7 A node can use these addresses to acquire data in parallel, thus reducing data-acquisition latency.3,4,5 However, network researchers trying to extend multihoming to VANETs must first address two main technical challenges:3

Addressing. A node usually performs one addressing process to be configured with addresses under a single GNP;2 that is, a node must perform n addressing processes to be configured with addresses with n GNPs, leading to considerable addressing latency; and

Data acquisition. In the unicast and anycast models, a node acquires data from a single provider. In multicast, a multicast address works only as a destination address; each destination multicast member receives a copy of the data from a particular source, so a multicast member also acquires data from a single provider. A node thus cannot use addresses with different GNPs to acquire data from providers in parallel via unicast, anycast, or multicast.

Wang9 proposed a k-anycast communication model in the IPv6 network with one GNP. In the k-anycast model,9 one k-anycast group consists of k-anycast members that cooperate to provide data in parallel; that is, a user can acquire data from more than one member in parallel, greatly reducing data-acquisition latency. Due to the efficiency of the k-anycast model, vehicle-network researchers are looking to take advantage of the k-anycast idea. Addressing and data-acquisition latency can thus be reduced through multihoming and the k-anycast model. Based on my proposed architecture for VANET with multihoming, a vehicle can be configured with addresses with different GNPs through a single addressing process, substantially reducing addressing latency. Also based on this architecture, the k-anycast model with a single GNP9 can be extended to a VANET with multiple GNPs so a vehicle can use addresses with multiple GNPs to acquire data from different k-anycast members in parallel, thus reducing data-acquisition latency.

There are two main differences between the data-acquisition mechanism I propose and the one suggested by the k-anycast mechanism:9

Multiple GNPs. The data-acquisition mechanism based on the k-anycast model9 works in the IPv6 network with a single GNP, whereas the one I propose works in VANETs with multiple GNPs; and

Member selection. In Wang,9 the optimal k-anycast members that cooperate to provide data are selected based on one GNP, whereas in my mechanism they are chosen based on multiple GNPs.

Anycast is different from k-anycast. In anycast, a node acquires data from one anycast member, whereas in k-anycast, a node acquires data from more than one k-anycast member in parallel.


Architecture

In DAVM, a VANET consists of access points (APs) and vehicles and is connected to the Internet through access routers (ARs). The lanes enclosed by px (px ≥ 2) APs construct a vehicular multihoming domain (VMD) Mx, and each AP is denoted by APx−y (1 ≤ y ≤ px). APx−y links with rx−y (rx−y ≥ 1) AR(s), denoted by ARx−y−z (1 ≤ z ≤ rx−y), each of which identifies one GNP denoted by GNPx−y−z. Mx can thus be defined by the GNP set Gx, as shown in Equation (1). A vehicle in Mx can use each GNP in Gx to construct an IPv6 address and then use the addresses with different GNPs to acquire data from providers in parallel. Figure 1 shows two VMDs, M1 and M2. The lanes enclosed by APx−y (x = 1, 1 ≤ y ≤ 4) form VMD M1, which is defined by GNP set G1, and the lanes enclosed by APx−y (x = 2, 1 ≤ y ≤ 4) construct VMD M2, which is defined by GNP set G2.

Figure 1. Architecture.

Equation (1). Gx = {GNPx−y−z | 1 ≤ y ≤ px, 1 ≤ z ≤ rx−y}
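To make the VMD notation concrete, consider the minimal sketch below (hypothetical names and prefixes, not code from the article): each AR identifies one GNP, each AP collects the GNPs of its ARs, and Gx is simply the union of those per-AP GNP sets.

```python
# Minimal sketch (hypothetical names): a vehicular multihoming domain (VMD)
# modeled as the GNPs announced by the access routers (ARs) behind its APs.

from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class AccessRouter:
    name: str
    gnp: str          # the global network prefix this AR identifies, e.g., "2001:db8:a::/48"


@dataclass
class AccessPoint:
    name: str
    routers: List[AccessRouter] = field(default_factory=list)

    def gnp_set(self) -> Set[str]:
        """G_{x-y}: the GNPs learned from this AP's ARs via router advertisements."""
        return {ar.gnp for ar in self.routers}


@dataclass
class VMD:
    """Vehicular multihoming domain M_x, defined by the GNP set G_x."""
    aps: List[AccessPoint] = field(default_factory=list)

    def gnp_set(self) -> Set[str]:
        """G_x: union of the GNP sets of all APs enclosing the domain (Equation (1))."""
        gx: Set[str] = set()
        for ap in self.aps:
            gx |= ap.gnp_set()
        return gx


# Example: two APs, each behind one AR with its own prefix.
m1 = VMD(aps=[
    AccessPoint("AP1-1", [AccessRouter("AR1-1-1", "2001:db8:a::/48")]),
    AccessPoint("AP1-2", [AccessRouter("AR1-2-1", "2001:db8:b::/48")]),
])
print(sorted(m1.gnp_set()))   # ['2001:db8:a::/48', '2001:db8:b::/48']
```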

In DAVM, one VMD is defined by a GNP set and the VMD-based architecture yields two main benefits:

Address separation. A VMD makes the proposed address-separation mechanism possible in a VANET with multihoming, reducing addressing latency. In the address-separation mechanism, a vehicle in one VMD is configured with a globally unique node ID through a single addressing process, then combines the node ID with each GNP in the GNP set defining the VMD to construct globally unique addresses with different GNPs. A vehicle can thus be configured with addresses with different GNPs through a single addressing process; and

Parallel acquisition. A VMD also makes the k-anycast model workable in a VANET with multiple GNPs, reducing data-acquisition latency; that is, a vehicle in one VMD can use addresses with different GNPs to acquire data from different optimal k-anycast members in parallel.


Addressing

In order to reduce the address-configuration latency in a VANET with multiple GNPs, I propose address separation as the basis of the addressing: a vehicle in Mx performs only one addressing process to be configured with a globally unique node ID and is uniquely identified through this node ID during its lifetime. The vehicle then combines its node ID with each GNP in Gx to construct globally unique addresses with different GNPs. A vehicle can thus be configured with addresses with different GNPs through a single addressing process.

Node ID space. If a node ID is w bits long (w is a positive integer) and the number of APs is 2^a (1 ≤ a < w − 1, a is a positive integer), then the node ID space [1, 2^w − 2] is divided into 2^a parts, with each part for one AP. The mth (1 ≤ m ≤ 2^a) AP's node ID A(m) is shown in Equation (2), and the mth AP's node ID space [L(m), U(m)] is shown in Equations (3) and (4). Each AP thus has a unique node ID and maintains its own globally unique node ID space.

Equation (2).

Equation (3).

Equation (4).
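Because Equations (2) through (4) are not reproduced here, the following is only a sketch of one plausible realization: the node ID space [1, 2^w − 2] split into 2^a equal contiguous blocks, with the mth AP keeping the first ID of its block as A(m). The article's exact formulas may differ.

```python
# Hedged sketch: one plausible equal-block partition of the node ID space.
# A(m), L(m), U(m) below are illustrative assumptions, not the published
# Equations (2)-(4).

def partition(w: int, a: int, m: int):
    """Return (A(m), L(m), U(m)) for the m-th AP, 1 <= m <= 2**a."""
    assert 1 <= a < w - 1 and 1 <= m <= 2 ** a
    block = (2 ** w - 2) // (2 ** a)                    # IDs per AP block
    lower = (m - 1) * block + 1                         # L(m)
    upper = 2 ** w - 2 if m == 2 ** a else m * block    # U(m); last block absorbs the remainder
    ap_id = lower                                       # A(m): assume the AP keeps the first ID
    return ap_id, lower, upper


# Example: 16-bit node IDs shared by 2**3 = 8 APs.
for m in (1, 2, 8):
    print(m, partition(w=16, a=3, m=m))
```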

GNP set. In DAVM, an AP stores the GNP sets defining the VMDs it belongs to. In Mx, APx−y can acquire GNPx−y−z by receiving a router advertisement from ARx−y−z. When APx−y has acquired its GNP set Gx−y, as shown in Equation (5), it performs the following three operations to acquire Gx:

Equation (5). Gx−y = {GNPx−y−z | 1 ≤ z ≤ rx−y}

Broadcasts. APx−y sets Gx to Gx−y and broadcasts one Neig-AP message whose payload is Gx−y.

Performs operations. Following receipt of the Neig-AP, a vehicle or AP performs acquisition operations based on three cases:

Case 1. A vehicle outside an AP’s communication range receives the Neig-AP, and the vehicle forwards the Neig-AP and repeats the operation;

Case 2. A vehicle within an AP’s communication range receives the Neig-AP, then updates the destination address in the Neig-AP with the address of the AP, forwards the Neig-AP, and repeats the operation; and

Case 3. An AP receives the Neig-AP, then updates Gx by performing the union operation, as shown in Equation (6).

Equation (6). Gx = Gx ∪ Gx−y

Ends. The process ends, as shown in Figure 2.

Figure 2. Addressing.

In this GNP-set acquisition process, APx−y might employ a positioning method10 to determine the VMD a Neig-AP comes from. As shown in Figure 2, AP1−1 in VMD M1 receives one Neig-AP from each APx−y (x = 1, 2 ≤ y ≤ 4) and establishes the GNP set G1 defining M1. Likewise, each APx−y (x = 1, 2 ≤ y ≤ 4) also acquires G1 by receiving Neig-AP messages. In this way, every APx−y (x = 1, 1 ≤ y ≤ 4) acquires G1.
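As a rough illustration of this exchange, the sketch below (hypothetical message handling, not the article's implementation) has each AP start from its locally learned set Gx−y and merge in the payload of every Neig-AP it receives, per the union in Equation (6).

```python
# Sketch of the Neig-AP exchange (hypothetical message format): each AP starts
# with its locally learned G_{x-y} and unions in the GNP sets carried by
# Neig-AP messages from the other APs of the same VMD (Equation (6)).

from typing import Dict, Set

# Locally learned GNP sets G_{x-y}, one per AP in VMD M_1.
local_gnps: Dict[str, Set[str]] = {
    "AP1-1": {"2001:db8:a::/48"},
    "AP1-2": {"2001:db8:b::/48"},
    "AP1-3": {"2001:db8:c::/48"},
    "AP1-4": {"2001:db8:d::/48"},
}

# Each AP's current estimate of G_x, initialized to its own set ("Broadcasts" step).
gx_estimate: Dict[str, Set[str]] = {ap: set(s) for ap, s in local_gnps.items()}

# "Performs operations", Case 3: an AP receiving a Neig-AP updates G_x by union.
def on_neig_ap(receiver: str, payload: Set[str]) -> None:
    gx_estimate[receiver] |= payload

# Every AP eventually receives (possibly via vehicle forwarding) every other
# AP's Neig-AP, so all estimates converge to the same G_1.
for sender, payload in local_gnps.items():
    for receiver in local_gnps:
        if receiver != sender:
            on_neig_ap(receiver, payload)

assert all(est == set().union(*local_gnps.values()) for est in gx_estimate.values())
print(sorted(gx_estimate["AP1-1"]))
```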

Address construction. A vehicle V1 in Mx that begins to move uses a hardware ID (such as a media-access control, or MAC, address) as a temporary address and acquires a node ID from the nearest AP APx−y based on the following three-step process:

Sends. V1 sends one N-Req message to APx−y;

Marks the node. The APx−y receiving the N-Req returns one N-Rep message where the payload includes the assigned node ID and GNP set Gx defining Mx, then marks the node ID as “assigned”; and

Sets its node. The V1 receiving the N-Rep sets its node ID to the node ID in the N-Rep and stores Gx, as shown in Figure 2.

Since APx−y's node ID space is globally unique, the node ID that APx−y assigns to V1 is also globally unique. V1, configured with a globally unique node ID, then combines its node ID with each GNP in Gx to acquire a globally unique IPv6 address per GNP. In Figure 2, V1 is located in M1, which is defined by G1. When V1 acquires a unique node ID from AP1−1, it combines the node ID with each GNP in G1 to acquire seven unique IPv6 addresses.
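To illustrate address construction, here is a minimal sketch that combines one node ID with each GNP in Gx; the /64 prefix length and the placement of the node ID in the low-order address bits are assumptions made for the example, not details fixed by the article.

```python
# Sketch: combine a vehicle's node ID with each GNP in G_x to form IPv6
# addresses. Prefix length and node-ID placement are illustrative assumptions
# (here the node ID fills the low-order bits after the prefix).

import ipaddress
from typing import List, Set


def build_addresses(node_id: int, gnp_set: Set[str]) -> List[ipaddress.IPv6Address]:
    addresses = []
    for gnp in sorted(gnp_set):
        net = ipaddress.IPv6Network(gnp)
        if node_id >= net.num_addresses:
            raise ValueError(f"node ID does not fit in {gnp}")
        addresses.append(net.network_address + node_id)   # prefix + node ID
    return addresses


# Example: node ID 0x2a assigned by AP1-1, two GNPs in G_1.
for addr in build_addresses(0x2A, {"2001:db8:a::/64", "2001:db8:b::/64"}):
    print(addr)   # 2001:db8:a::2a and 2001:db8:b::2a
```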


Data Acquisition

In the k-anycast model with a single GNP,9 one k-anycast group consists of k-anycast members that cooperatively provide data in parallel. A user can thus acquire data from different k-anycast members in parallel, greatly reducing acquisition latency. In order to achieve the main DAVM objective of reduced data-acquisition latency, DAVM extends the k-anycast idea with one GNP9 to a VANET with multihoming so vehicles can use addresses with different GNPs to acquire data from different optimal providers in parallel.

In DAVM, a single k-anycast address defines one type of content, and all providers able to provide that content form a k-anycast group uniquely specified by the k-anycast address. The k-anycast address structure consists of a w-bit k-anycast ID field and a reserved field whose value is zero (see Table 1). In DAVM, a particular content C is divided into q (q ≥ 2) parts, with each part cu (1 ≤ u ≤ q) uniquely identified by a part ID du, as shown in Equation (7). A vehicle uses a content address, consisting of the k-anycast ID and a part ID set, to achieve k-anycast communication (see Table 2). The k-anycast ID specifies the type of desired content, and the part ID set indicates the specific parts of the content.

Table 1. k-anycast address.

Table 2. Content address.

Equation (7).
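The sketch below models the two address types informally; the field widths and encodings are placeholders rather than the exact layouts of Tables 1 and 2.

```python
# Sketch of the two address types (field widths/encodings are illustrative):
# a k-anycast address identifies one type of content; a content address pairs
# that k-anycast ID with the set of part IDs being requested.

from dataclasses import dataclass
from typing import FrozenSet


@dataclass(frozen=True)
class KAnycastAddress:
    kanycast_id: int          # w-bit k-anycast ID; remaining bits reserved (zero)


@dataclass(frozen=True)
class ContentAddress:
    kanycast_id: int          # same ID as the k-anycast address of the group
    part_ids: FrozenSet[int]  # P_i: which parts of the content are requested


# Content C_1 split into q = 4 parts with part IDs d_1..d_4.
K1 = KAnycastAddress(kanycast_id=0x77)
D1 = ContentAddress(kanycast_id=K1.kanycast_id, part_ids=frozenset({1, 2}))
D2 = ContentAddress(kanycast_id=K1.kanycast_id, part_ids=frozenset({3, 4}))
print(D1, D2)
```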

Vehicle V1 is located in Mx, as defined by GNP set Gx, and acquires content C1 identified by k-anycast address K1 through the following five-step process:

Selects. V1 selects g (2 ≤ g ≤ |Gx|) GNPs from Gx to construct g addresses, denoted by Si (1 ≤ i ≤ g), in which the node ID is V1's node ID, and then constructs g content addresses denoted by Di. In Di, the k-anycast ID is the same as the ID in K1, and the part ID set is Pi, which defines the data parts Bi, as shown in Equation (8), where ci−j (1 ≤ j ≤ |Pi|) is the data part identified by element di−j in Pi. This way, Bi satisfies Equation (9);

Equation (8). Bi = {ci−j | 1 ≤ j ≤ |Pi|}

Equation (9).

Sends. V1 sends g data-request messages denoted by Ri in which the destination address is Di and the source address is Si;

Routes. Based on Si, Ri is routed to ARi, the AR that identifies the GNP in Si. Based on Di, ARi routes Ri to the optimal k-anycast member Ai with k-anycast address K1. Based on Pi in Di, Ai returns a data-response message Ei in which the destination address is Si and the payload is Bi;

Further routes. Based on the GNP in Si, Ei is routed to ARi and then, based on the node ID in Si, Ei is routed to V1; and

Receives data. V1 can thus receive g data-response messages from different k-anycast members in parallel, as shown in Figure 3.

Figure 3. Data acquisition.

In Figure 3a, V1 is located in M1, which is defined by G1 and connects with ARi (1 ≤ i ≤ 4), and the k-anycast group includes four members Ai that provide content C1, as defined by k-anycast address K1. C1 is divided into four parts, each defined by a part ID di. V1 constructs four content addresses Di in which the k-anycast ID is the same as the ID in K1 and the part ID set is Pi. V1 selects four GNPs, GNPi, to construct four addresses, Si. V1 then sends four data-request messages Ri in which the source address is Si and the destination address is Di. Based on Si, Ri reaches ARi, which routes Ri to the optimal k-anycast member Ai. Based on Pi in Di, Ai returns a data-response message Ei in which the destination address is Si and the payload is the content parts Bi, as defined by Pi. Based on the GNP in Si, Ei is routed to ARi. Based on the node ID in Si, Ei is routed to V1, which receives different parts of C1 from different k-anycast members in parallel.
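Putting the five steps together, here is a rough sketch of the acquisition loop; routing, AR selection, and optimal-member selection are stubbed out, and all names are hypothetical. The point is only that the part IDs of C1 are split across g source addresses and the g requests are issued concurrently.

```python
# Sketch of the parallel acquisition loop (routing and optimal-member selection
# are stubbed; names are hypothetical). The vehicle splits the part IDs of C_1
# across g source addresses and fetches the g responses concurrently.

from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Sequence


def split_parts(part_ids: Sequence[int], g: int) -> List[List[int]]:
    """Assign each part ID d_u to one of g requests (round robin)."""
    buckets: List[List[int]] = [[] for _ in range(g)]
    for idx, d in enumerate(part_ids):
        buckets[idx % g].append(d)
    return buckets


def fetch_from_member(source_addr: str, part_ids: List[int]) -> Dict[int, bytes]:
    """Stub for: request R_i routed via AR_i to the optimal member A_i,
    which returns response E_i carrying the data parts B_i."""
    return {d: f"data-part-{d}".encode() for d in part_ids}


def acquire(content_parts: Sequence[int], source_addrs: List[str]) -> Dict[int, bytes]:
    g = len(source_addrs)
    buckets = split_parts(content_parts, g)
    received: Dict[int, bytes] = {}
    with ThreadPoolExecutor(max_workers=g) as pool:
        futures = [pool.submit(fetch_from_member, s, b)
                   for s, b in zip(source_addrs, buckets)]
        for fut in futures:
            received.update(fut.result())   # reassemble C_1 from all B_i
    return received


# Example: C_1 has 4 parts; the vehicle uses 4 addresses with different GNPs.
parts = acquire([1, 2, 3, 4], [f"2001:db8:{p}::2a" for p in "abcd"])
print(sorted(parts))
```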


Performance Evaluation

Following the earlier description of addressing, the addressing latency TA consists of the node ID request latency TA-Req and the node ID response latency TA-Rep, as shown in Equations (10), (11), and (12), in which b is the data rate, t is the delay in transmitting a bit between neighbors, tMax is the delay in transmitting a message with maximum size sMax between neighbors, l is the distance between a vehicle and the nearest AP, and sID-Req/sID-Rep is the size of an N-Req/N-Rep. Following the earlier description of data acquisition, the data-acquisition latency TC consists of the data-request latency TReq and the data-response latency TRep, as shown in Equations (13), (14), and (15), in which li is the distance between ARi and the nearest k-anycast member, l′i is the distance between ARi and the vehicle, and sReq/sRep is the size of a data-request/data-response message. The notation used in DAVM is listed in Table 3.

Table 3. Notation.

Equation (10).

Equation (11).

Equation (12).

Equation (13).

Equation (14).

Equation (15).
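Because Equations (10) through (15) are not reproduced here, the following is only a toy store-and-forward model under assumed per-hop costs; it illustrates how the request and response halves add up over the relevant distances, not the article's exact formulas.

```python
# Toy latency model (assumed, not the article's Equations (10)-(15)): each
# message crosses a number of wireless hops; per hop it pays a transmission
# time s/b plus a per-bit propagation term s*t, and T_A / T_C are the sums of
# their request and response halves.

def hop_latency(size_bits: float, b: float, t: float) -> float:
    return size_bits / b + size_bits * t


def addressing_latency(l_hops: int, s_req: float, s_rep: float, b: float, t: float) -> float:
    """T_A = T_A-Req + T_A-Rep over l hops between the vehicle and the nearest AP."""
    return l_hops * (hop_latency(s_req, b, t) + hop_latency(s_rep, b, t))


def acquisition_latency(l_vehicle: int, l_member: int,
                        s_req: float, s_rep: float, b: float, t: float) -> float:
    """T_C = T_Req + T_Rep along the vehicle<->AR_i and AR_i<->member paths."""
    hops = l_vehicle + l_member
    return hops * hop_latency(s_req, b, t) + hops * hop_latency(s_rep, b, t)


# Example with made-up parameters: 2 Mb/s data rate, 1 microsecond per-bit delay.
print(addressing_latency(l_hops=3, s_req=512, s_rep=1024, b=2e6, t=1e-6))
print(acquisition_latency(l_vehicle=3, l_member=2, s_req=512, s_rep=8192, b=2e6, t=1e-6))
```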

DAVM is evaluated in ns-2 using the simulation parameters in Table 4, in which the number of k-anycast members is equal to g. DAVM is compared with the addressing standard2 and with the data-acquisition scheme with a single GNP,9 called Data Acquisition One GNP, or DAOGNP, as shown in Figure 4 and Figure 5.

Table 4. Simulation parameters.

Figure 4. Addressing latency.

Figure 5. Data-acquisition latency.

As shown in Figure 4, as the number of addresses increases, the addressing latency in the standard increases, whereas the addressing latency in DAVM tends to be stable. Such stability follows from DAVM's address-separation mechanism: a vehicle is configured with multiple addresses with different GNPs through a single addressing process, so addressing latency is only minimally affected by the number of addresses. In the standard, a separate addressing process must be performed for each GNP, so the addressing latency grows with the number of addresses. As shown in Figure 5, as the number of GNPs increases, the data-acquisition latency in both DAVM and DAOGNP decreases, but DAVM involves less data-acquisition latency for two main reasons:

Based on multiple GNPs. In DAVM, the optimal k-anycast members that provide the data are selected based on multiple GNPs, whereas in DAOGNP, the optimal k-anycast members are selected based on a single GNP; and

Single address. In DAVM, a vehicle uses addresses with different GNPs to acquire data from optimal k-anycast members through different routing paths in parallel, whereas in DAOGNP, a node uses a single address with a single GNP to acquire data from the k-anycast members that are optimal only with respect to that single GNP.


Conclusion

My observation that multihoming and k-anycast can help lower data-acquisition latency led me to extend both to VANETs and to propose DAVM as a way to reduce that latency. My results show DAVM works for three main reasons:

  • Extends multihoming to VANET;
  • Extends the k-anycast idea with one GNP to VANET with multihoming so vehicles can use addresses with different GNPs to acquire data from multiple k-anycast members in parallel; and
  • Provides the address-separation mechanism so vehicles can obtain multiple addresses with different GNPs through a single addressing process.

My future work will aim to take advantage of the powerful computing capabilities and abundant storage resources of APs to help improve addressing and data acquisition in VANET with multihoming.


Acknowledgment

This work is supported by the 333 Project Foundation (grant number BRA2016438), CERNET Innovation Project (grant number NGII20170106), and National Natural Science Foundation of China (grant number 61202440).


References

    1. Amadeo, M., Campolo, C., and Molinaro, A. Information-centric networking for connected vehicles: A survey and future perspectives. IEEE Communications Magazine, 54, 2 (Feb. 2016), 98–104.

    2. Droms, R., Bound, J., Volz, B., Lemon, T., Perkins, C., and Carney, M. Dynamic Host Configuration Protocol for IPv6 (DHCPv6), RFC 3315. Internet Engineering Task Force, Fremont, CA, 2003; http://www.ietf.org/rfc/rfc3315.txt

    3. Gladisch, A., Daher, R., and Tavangarian, D. Survey on mobility and multihoming in future Internet. Wireless Personal Communications 74, 1 (Jan. 2014), 45–81.

    4. Islam, S., Hashim, A.H.A., Habaebi, M.H., and Hasan, M.K. Design and implementation of a multihoming-based scheme to support mobility management in NEMO. Wireless Personal Communications 95, 2 (Feb. 2017), 457–473.

    5. Khatouni, A.S., Marsan, M.A., and Mellia, M. Video upload from public transport vehicles using multihomed systems. In Proceedings of the IEEE Conference on Computer Communications (San Francisco, CA, Apr. 10–14). IEEE Computer Society Press, 2016, 306–307.

    6. Omar, H., Zhuang, W., and Li, L. Gateway placement and packet routing for multihop in-vehicle Internet access. IEEE Transactions on Emerging Topics in Computing 3, 3 (Mar. 2015), 335–351.

    7. Troan, O., Miles, D., Matsushima, S., Okimoto, T., and Wing, D. IPv6 Multihoming Without Network Address Translation, RFC 7157. Internet Engineering Task Force, Fremont, CA, 2014; http://www.ietf.org/rfc/rfc7157.txt

    8. Vegni, A.M. and Loscri, V. A survey on vehicular social networks. IEEE Communications Surveys & Tutorials 17, 4 (Apr. 2014), 2397–2419.

    9. Wang, X. Analysis and design of a k-anycast communication model in IPv6. Computer Communications 31, 10 (Oct. 2008), 2071–2077.

    10. Wang, X. and Zhong, S. Research on IPv6 address configuration for a VANET. Journal of Parallel and Distributed Computing 73, 6 (June 2013), 757–766.

    11. Wang, X. and Zhu, X. Anycast-based content-centric MANET. IEEE Systems Journal PP, 99 (Nov. 2016), 1–9.

    12. Zheng, Z., Lu, Z., Sinha, P., and Kumar, S. Ensuring predictable contact opportunity for scalable vehicular Internet access on the go. IEEE/ACM Transactions on Networking 23, 3 (Mar. 2015), 768–781.
