American Express Technology Open Source and Blog

Harnessing Gen AI to Power Restaurant Recommendations (2025-11-07)
https://americanexpress.io/harnessing-gen-ai-to-power-restaurant-recommendations

Today, we’re excited to announce the beta launch of our new American Express Dining Companion, a tool that uses generative AI (Gen AI) and traditional AI to help Card Members discover U.S. Resy restaurants. Available to eligible Card Members in the U.S. in the “Try New Features” section under the Account tab in the Amex app, Dining Companion streamlines finding and reserving a U.S. Resy restaurant.

This limited beta launch represents the next step in our use of AI to deliver intuitive and personalized experiences for Card Members and drive business to our merchant partners. Dining Companion brings together Amex’s proprietary technology, insights from Card Member spend history, an LLM, a personalized recommendation engine, and Resy’s API to make restaurant discovery easy. Card Members can simply submit prompts to explore available U.S. Resy restaurants that meet their culinary preferences, location, and other criteria, and then book their reservation directly through Resy.

A Natural Evolution in Our AI Strategy

Dining Companion builds on more than a decade of AI innovation at American Express. From fraud detection and credit risk models to Amex Chat and personalized offers, AI helps us deliver trusted and personalized services at scale. For example, we recently introduced Dining and Hotel Recommendations, which provides an AI-powered content feed of personalized dining and hotel suggestions to Card Members.

Building on Our Legacy of Innovation in Dining and Discovery

Dining Companion advances our commitment to transform the way Card Members dine out. Dining is one of the top passion areas for American Express® Card Members. It is our largest travel and entertainment spend category globally, with more than $87 billion spent on dining in the U.S. in 2024*. To fuel this passion, American Express created a dining ecosystem that connects Card Members with exceptional restaurants and special experiences through curated programs and platforms like Resy.

Within this dining ecosystem, we continue innovating to help our Card Members discover new experiences. For example, Resy users can access editorial content like Hit Lists and guides, and now with Dining Companion, eligible Card Members can also easily discover U.S. Resy restaurants through the Amex app using conversational AI.

Designed with Trust, Security, and Responsible AI Principles

Every Amex innovation starts with a commitment to trust, security, and service — and that remains central to our AI products and programs. In recent years, we’ve expanded our use of responsible AI with the help of our AI Enablement Layer, a key component of Amex’s enterprise technology infrastructure that unites data, security, and compliance within a single framework. By embedding controls and risk management into every stage of AI development, this layer provides the trusted foundation that allows teams to innovate confidently. It serves as the backbone for initiatives like our Gen AI use cases in travel and servicing and now Dining Companion.

In addition to being built on our AI Enablement Layer, Dining Companion is guided by the following principles:

  • Architecture: Privacy-first and built to scale.

  • Human oversight: Observability to validate model output and maintain reliability while driving continuous improvement.

  • Responsible AI: Built-in safety mechanisms to mitigate bias, reduce hallucinations, and ensure traceability.

What’s Next

The Dining Companion beta is currently available as a limited release to eligible Card Members, with enrollment capped during the initial phase. We’re learning and iterating, with our teams closely analyzing user interactions to refine features such as conversational flow, recommendation relevance, tone, and pacing. We’re looking forward to evolving the product with these learnings and rolling out Dining Companion to more U.S. Card Members.

* Based on American Express proprietary data from January–December 2024. Data refers to all U.S. American Express® Card Members.

How Gen AI is Elevating Human Connection at American Express (2025-09-24)
https://americanexpress.io/how-gen-ai-is-elevating-the-human-connection-at-american-express

At American Express, relationships are at the heart of how we deliver world-class service excellence. Our Customer Care Professionals and Travel Counselors do more than just service requests or book trips; they offer guidance, earn trust, and help create memorable experiences. Now, with Generative AI (Gen AI), we’re amplifying human connections and empowering our workforce in many ways.

Gen AI Empowering our People

In our role as engineers, we often talk about scale, speed, and performance. But the real magic happens when those things translate into something more human: helping a colleague deliver faster, more thoughtful service. It lets our colleagues shine, while our customers feel truly seen and cared for. That value, delivered to our colleagues and customers through impactful use cases, is what drives our work on Gen AI at American Express.

Use Case #1: Travel Counselor Assist – Real-Time AI Support on the Front Lines

One of the areas where we first introduced Gen AI was in servicing, integrating real-time intelligence directly within the workflow. This advancement empowers our Travel Counselors to access relevant suggestions and context-aware insights instantly as they engage with customers. The tool leverages large language models and real-time data to deliver high-quality recommendations in the moment, reducing manual search while enhancing support, deepening personalization, and delighting Card Members.

We are seeing clear benefits, with 88% of Travel Counselors globally reporting high satisfaction with the tool. Travel Counselors also shared that the tool simplifies their workflow and gives them the confidence and expertise to serve Card Members better than ever.

Use Case #2: Colleague Help Center Knowledge Assist – A Smarter Way to Support Our People

The second use case we’re excited about is the Knowledge Assist tool, a Gen AI-powered chatbot delivering real-time relevant information to help our colleagues resolve customer inquiries faster and more accurately. Instead of manually searching through static articles, our frontline colleagues can now ask questions and receive direct, AI-generated answers pulled from approved content. This allows them to stay fully focused on what matters most – the customer.

With Colleague Help Center Knowledge Assist, we’re enhancing both colleague and customer satisfaction through faster, more accurate resolutions, which lead to fewer call-backs. In fact, through our work reformatting thousands of documents for accuracy, we’re seeing a 98% accuracy rate for AI-generated answers in the U.S. (as of May ‘25).

Engineering Excellence: Building Responsibly at Scale

Behind the scenes, our engineers have built robust, enterprise-ready pipelines that balance innovation with safety. We strive to ensure our Gen AI tools are low-latency, highly reliable and meet rigorous compliance and performance standards.

Key principles in our approach:

  • Architecture: Built to scale and meet compliance requirements.

  • Observability: Deep monitoring, feedback loops, and appropriate human oversight drive continuous improvement.

  • Responsible AI: Every model is evaluated through a governed framework to ensure fairness and traceability.

Our commitment? Never compromise on trust, service, or security, because our teams depend on us to get it right every time.

This thoughtful approach ensures we maintain our brand promise of trust, security, and service, because when our Customer Care Professionals and Travel Counselors depend on these tools in real time, there’s no room for error.

Why We Do This

The true reward isn’t just metrics or technical wins. It’s in the moments we sit with a colleague, whether a Travel Counselor or Customer Care Professional, and watch them use these tools, see them work with ease, and effortlessly navigate the customer’s needs without friction. It’s technically challenging, but immensely rewarding when a colleague says, “This saved me five clicks and got me back to my customer faster.” That’s when we know we’ve built something that truly matters.

Final Thought: Relationship-Powered. Tech-Enabled.

We’re building platforms that let our colleagues focus on what matters most – the customer.

Our approach is relationship-powered and tech-enabled. We’re not here to automate empathy; we’re here to clear the path for it. Gen AI helps remove the repetitive tasks, delays, and noise so our colleagues can be responsive and human for every customer, every time.

At American Express, we’re proud to be relationship-powered and tech-enabled. It’s not just the future—it’s the present we’re creating every day.

Empowering Innovation: The Python Paved Road (2025-08-27)
https://americanexpress.io/empowering-innovation-the-python-paved-road

While the idea of “paved roads” in technology is widely recognized, American Express has reimagined this concept to serve its unique ecosystem of engineers. At its core, a paved road represents a thoughtfully crafted pathway that removes unnecessary friction, empowers teams to deliver with greater confidence, and accelerates innovation by promoting best practices.

At American Express, these paved roads are far more than just technical guidelines; they are the backbone of a community-driven culture that values collaboration, transparency, and continuous improvement. By streamlining the journey from development to production, Amex ensures that every engineer, whether new or seasoned, has access to the resources, templates, and support needed to build, deploy, and maintain world-class software. This intentional approach not only heightens productivity but also fosters a spirit of shared ownership and excellence across the organization.

Codifying Best Practices: The Birth of the Amex Way

In 2020, the Developer Experience team formalized software development at Amex with the “Building Software the Amex Way” initiative. Over 500 engineers contributed to the Amex Way Library, forming a collaborative repository of architecture patterns, CI/CD pipelines, and community tools. This resource empowered teams to navigate the engineering landscape with clarity and consistency. With the Amex Way, we began formalizing how we constantly iterate, test and learn, and socialize best practices across our community.

Activating The Developer Community

As the library evolved, engineers called for repeatable workflows and consistent deployment strategies. A couple of our Distinguished Engineers responded by launching community-driven paved roads, starting with the JVM and soon followed by Go, a process maintained by active JVM and Go Guilds. These templates, crafted “by devs for devs,” ensured relevance, rapid adoption, and robust community stewardship. The intentional approach established a backbone of collaboration and continuous improvement throughout the organization.

Transparency and Security Embedded in Every Step

Paved roads at Amex are built with transparency and security at their core. Technical Guilds collaborate openly with the community via public pull requests, allowing anyone to offer feedback. Security and compliance guardrails are natively embedded in CI/CD templates, allowing engineers to focus on customer innovation while upholding our rigorous safety standards. This allows every engineer, whether new or seasoned, access to resources, templates, and support for building world-class software.

Flexible Solutions for Diverse Needs

Each paved road combines a comprehensive playbook, a real-world template, and (for Go) a modular toolkit enabling customization for resiliency, security, and observability. For web development, the micro-frontend platform empowers teams to contribute, iterate, and roll out features for customer-facing websites without sacrificing user experience or consistency.

Python Paved Road: A Feat of Collaboration

In early 2024, the Python Guild activated a distributed team of 21 contributors across five time zones to define a uniquely Amex approach for Python. To meet the challenge of engagement and consensus, three collaborative working groups were formed, each composed of article writers, a senior engineer serving as technical reviewer, and a technical writer. These working groups also convened in open Slack discussions and managed a repository of Architectural Decision Records (ADRs) to document rationale and ensure transparency. These efforts culminated in a one-of-a-kind paved road that accelerates innovation while promoting best practices.

Key Learnings from Building the Python Paved Road

  • Engage Within Existing Workflows: Balancing new practices with familiar routines minimizes cognitive load and accelerates adoption.

  • Distributed Collaboration Works with Clear Structure: Cross-timezone teams thrive with small, focused groups and open communication channels.

  • Transparency Builds Consensus: Public discussions and immutable documentation of decisions (ADRs) foster trust and community buy-in.

  • Focus on Fundamentals, Plan for Growth: Limiting initial scope to generic web service fundamentals enables future expansion to broader Python use cases like AI, data science, and cybersecurity.

Continuous Evolution: Launching and Scaling the Python Paved Road

Launched in January, the Python paved road quickly gained traction, drawing over 3,000 unique visitors to its guidebook. The next phase will see the introduction of practical templates, empowering teams to deploy production-ready services rapidly, guided by a one-touch/zero-touch deployment strategy. Quarterly updates will broaden coverage, supporting the evolving needs of a diverse engineering community and ensuring Amex continues to set the standard for developer experience and innovation.

Building Resilient Systems from the Customer’s Perspective (2025-08-04)
https://americanexpress.io/building-resilient-systems-from-the-customers-perspective

As customer expectations for fast, seamless, and always-available digital experiences continue to grow, it’s increasingly important to measure availability through the lens of the customer rather than individual applications. Traditional, more siloed approaches to measuring availability can fail to capture the full extent of how customers experience service disruptions or system latency. Resiliency is in the eyes of the user, so it’s important to orient observability around the customer experience and prioritize continuous improvement efforts geared toward the most critical customer journeys.

Historically, our teams followed the standard industry practice of instrumenting each system individually across its own technology stack. While this approach is useful for identifying and diagnosing issues that originate from within a single system, it struggles to capture inter-system dependencies or the full path of a customer journey. Under this standard approach, troubleshooting customer-reported issues often requires coordination across multiple teams and can lead to longer resolution times.

To shift toward a more customer-centric approach, we embarked on a journey to define, map, measure, and test end-to-end customer journeys. These journeys spanned multiple personas, including customers, merchants, and even internal colleagues. With this approach, the focus shifted from the success of an application to the success of the entire customer journey.

Our path to customer journey-oriented resiliency:

  1. Define: We started by identifying a set of critical customer journeys across the company, defined as intents, such as “I want to pay my bill” or “I want to apply for a card product.” We then further split these journeys into tiers of criticality to allow us to set availability targets accordingly.

  2. Map: This step was crucial to the overall accuracy and effectiveness of our journey approach and involved mapping each customer journey to the underlying applications, databases, third-party systems, and other system components that support it. These detailed journey topologies helped identify dependencies, risks, key internal stakeholders, and single points of failure, while also enabling better correlation of incidents and allowing us to detect issues before they impacted customers.

  3. Measure: Availability reporting became the cornerstone of our customer journey-centric approach. To ensure accurate end-to-end measurement, we adopted an enterprise-wide logging framework and standard telemetry to enable traceability and transaction correlation across systems. This new monitoring allows us to observe attempted transactions, out-of-pattern traffic volumes, and business exceptions, which enables us to more quickly identify potential customer impacts that might otherwise go unnoticed through regular downtime monitoring approaches (a toy illustration of journey-level measurement follows this list).

  4. Test: While testing individual system failover was already a standard practice in our Disaster Recovery program, we expanded this requirement to test all systems linked to a single customer journey at the same time. These end-to-end tests involve failing over entire journeys, often comprising dozens of interconnected systems, for multiple days. These exercises provide valuable insights into system resiliency, allowing us to uncover bottlenecks, hidden dependencies, and system latency that may not surface through isolated failover testing. By mimicking real-world scenarios, we are able to test whether an entire customer journey can withstand unexpected disruptions.
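
To make the journey-level view concrete, here is a toy Python sketch of our own; it is not the actual Amex tooling, and the journey, step outcomes, and numbers are hypothetical. It shows how a journey can look far less available to the customer than any single application does in isolation.

# Toy illustration: per-application availability vs. end-to-end journey availability.
# Each attempted journey is a list of step outcomes (True = that step succeeded).
# Names and numbers are hypothetical.

def app_availability(outcomes):
    """Availability of a single application: successful calls / attempted calls."""
    return sum(outcomes) / len(outcomes)

def journey_availability(journeys):
    """A journey counts as available only if every step in it succeeded."""
    return sum(all(steps) for steps in journeys) / len(journeys)

# "I want to pay my bill" touches three hypothetical systems per attempt.
attempts = [
    [True, True, True],
    [True, False, True],   # middle dependency failed -> customer impact
    [True, True, True],
    [True, True, False],   # last step failed -> customer impact
]

per_app = [app_availability([a[i] for a in attempts]) for i in range(3)]
print(per_app)                         # [1.0, 0.75, 0.75] per application
print(journey_availability(attempts))  # 0.5 -- the customer's actual experience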

In making changes to support this customer-centric approach, it was critical to ensure that we brought stakeholders together from across the organization to support the program. We needed both top level leadership support as well as widespread buy-in from development and support teams, and we achieved this through constant collaboration and intentional expansion of customer journeys. What started with a handful of critical customer journeys has now expanded to over 65 journeys spanning every line of business.

In addition to achieving continuous year-over-year improvements to our customer journey availability, one of the most positive outcomes of this program was the cultural shift of bringing end-to-end teams together, creating shared goals, and driving value collectively. This cross-functional approach enabled best practice sharing, streamlined communication, and strengthened our overall resiliency posture beyond our defined journeys and into how we build and develop software every day.

Go at American Express Today: Seven Key Learnings (2025-07-16)
https://americanexpress.io/go-at-american-express-today

Since adopting Go in late 2016, our teams at American Express have seen the language live up to its promises of performance, efficiency, and scalability. But that impact didn’t come without learning curves.

Along the way, we uncovered key insights that have helped shape how we use Go across our platforms today. This post focuses on what we’ve learned and how those lessons continue to influence our engineering practices.

1. Evolving with Go Modules

One of the first friction points we encountered was around dependency management.

In the early days, Go’s dependency management was simplistic but very challenging to use within an enterprise network behind corporate firewalls and internet proxies.

The introduction of Go Modules changed that. Today, versioning and managing dependencies is far more straightforward—even for large, complex projects. It’s now commonplace to see Go support within most enterprise developer tools.

2. Concurrency: Easy to Start, Hard to Master

Go’s concurrency model was a major draw, but putting it into production wasn’t without its headaches.

While it is easy to create goroutines, cleanup and coordination across them initially presented some challenges.

These challenges pushed us to adopt stronger patterns for managing concurrent processes. We’ve embraced the sync and context packages, along with the disciplined use of defer, to avoid goroutine leaks.

These patterns now help us write more robust and maintainable Go code. And to ensure we catch concurrency bugs early, we enable Data Race Detection in our tests by default.

3. Training for Idiomatic Go

Despite Go’s relative simplicity, ramping up teams took effort.

As Go was a new language for many of our engineers, we invested in internal training programs and developed documentation that helps teams write idiomatic Go from day one.

This foundation has helped us scale usage in the enterprise beyond our initial adopting platforms.

4. Standardizing with Frameworks and Toolkits

As adoption grew, so did the need for structure.

To standardize and optimize how we build services, we created a “Paved Road” or “Golden Framework,” which has built-in support for non-functional requirements like observability, asynchronous health checks, graceful shutdown, and security.

This paved road is built from our “Off-Roading” toolkit. The toolkit is a collection of internal packages that often wrap open-source packages tailored to our unique internal infrastructure. A great example of this toolkit is our logging package, which wraps the standard library structured logger slog to support our internal logging format and enable asynchronous logging with buffering and truncation policies.

5. Performance Tooling Built-in

Go can help our engineers spend less time debugging performance issues and more time building.

Out of the box, Go comes with native tools like pprof and benchmark tests, which have streamlined performance profiling.

Low garbage collection pause times and efficient use of goroutines are a perfect fit for our low-latency, high-scale payments platforms.

Not to mention how Go’s efficient resource usage has helped us reduce infrastructure overhead.

6. Integrating with Observability Tools

We were initially concerned about Go’s compatibility with our observability stack. Thankfully, integration was smoother than expected. Metrics, tracing, and logging worked well with our existing tools, with little customization required—a critical win for production reliability.

7. Fostering a Thriving Internal Community

One of the most valuable outcomes of Go adoption has been the emergence of a strong internal community. Nearly 1,000 engineers actively collaborate in our internal Go channel, supported by monthly meetups, internal conferences, and the continuously evolving paved road framework, off-roading toolkit, and documentation.

This peer-driven culture has been essential in refining best practices and driving innovation.

Looking Ahead

Go is now a core part of our engineering toolkit for our performance critical platforms, and we continue to invest in making it even more powerful for our teams. By listening to developers, solving real challenges, and sharing what works, we’ve created a sustainable path for Go at American Express—one built on continuous learning and community-driven growth.

You can read our original Choosing Go blog here.

A New Method for Enhancing Neural Network Training (2025-06-13)
https://americanexpress.io/a-new-method-for-enhancing-neural-network-training

American Express is known for having best-in-class credit ratings and fraud detection. More than 800 American Express colleagues work together to provide precise data that drives informed credit decisions. One of the ways we stay at the forefront of data science research, discover cutting-edge technologies, and connect with emerging talent is through partnerships with colleges and universities across the globe. We have a long-standing relationship with Imperial College London’s Department of Mathematics. Every year, we supervise graduate students on their Summer Research Project and focus on identifying promising technical challenges that can meaningfully impact our work in risk assessment and fraud detection.

Our latest collaboration led to the development of EDAIN (Extended Deep Adaptive Input Normalization), a novel preprocessing technique that improves how neural networks handle complex financial time series data. Based on our results, we were able to publish the paper at the AISTATS conference a few years ago. We’ll dive into the challenge we tackled, the solution, the results we’ve seen so far with the EDAIN technique, and the future steps for research.

Preprocessing Complex Financial Data

In credit risk and fraud detection, our data presents unique challenges:

  • Transaction patterns often show multiple distinct behaviors, which can cause the data to be distributed with multiple modes (see D_48 as an example in Figure 1);
  • Credit utilization tends to be unevenly distributed, causing feature values to be distributed according to a power law, which causes skewness in the data. This is illustrated with the D_39 feature shown in Figure 1;
  • Unusual spending behavior creates outliers (see for example P_2, Figure 1);
  • Many customer characteristics used in modelling have a high proportion of missing values;
  • Due to changing macroeconomic conditions, customer characteristics also change over time, making the time series features non-stationary.

Figure 1: Histogram of three different variables from the American Express credit risk dataset, showing how financial data exhibit skewness, outliers, and multiple modes. “Extended Deep Adaptive Input Normalization for Preprocessing Time Series Data for Neural Networks” by Marcus A. K. September, Francesco Sanna Passino, Leonie Goldmann, and Anton Hinel is licensed under CC BY 4.0.

Why preprocessing is essential

If these data irregularities are not appropriately treated, model performance can degrade significantly and predictions on out-of-time samples might be heavily biased, especially when using a neural network model. Additionally, appropriate data preprocessing can increase model training efficiency, enabling quicker model iteration. The input will be on the same scale as the initial model weights, meaning the first couple of weight updates do not need to adjust for differences in magnitude.

Traditional preprocessing methods, like standard scaling and min-max scaling:

  • Do not adequately handle common irregularities like skewness and outliers (a small illustration follows this list);
  • Need frequent adjustments as data patterns change due to changing macroeconomic conditions;
  • Often struggle with complex non-stationary financial time series.
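
As a small illustration of the first point (our own example, not taken from the paper), a single extreme outlier dominates min-max scaling and inflates the spread used by standard scaling, squashing the regular values into a narrow band:

import numpy as np

# Hypothetical feature: mostly regular values plus one extreme outlier.
x = np.array([1.0, 2.0, 1.5, 2.5, 1.8, 1000.0])

min_max = (x - x.min()) / (x.max() - x.min())   # min-max scaling
standard = (x - x.mean()) / x.std()             # standard scaling

print(min_max.round(4))    # regular values collapse near 0, the outlier sits at 1
print(standard.round(4))   # regular values cluster tightly below the mean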

EDAIN: A Novel Solution

Our technique introduces four key innovations:

  • Automatically identifies the optimal data processing procedure for the data and problem at hand in an end-to-end fashion;
  • Dynamically adjusts to changes in data distribution in non-stationary time series;
  • Intelligently handles data with outliers and skewed distributions;
  • Offers flexibility to work with both global and local data patterns.

Figure 2: Architecture of the proposed EDAIN (Extended Deep Adaptive Input Normalization) layer. The input time-series is passed through four consecutive preprocessing sublayers before being fed as input to the deep neural network model. “Extended Deep Adaptive Input Normalization for Preprocessing Time Series Data for Neural Networks” by Marcus A. K. September, Francesco Sanna Passino, Leonie Goldmann, and Anton Hinel is licensed under CC BY 4.0.

How it works:

  • The EDAIN method consists of four distinct sublayers applied in consecutive order.
  • The first sublayer mitigates the effect of outliers with a smoothed winsorization operation using the tanh function. This results in pushing outliers closer to the regular values.
  • The second and third sublayers apply a shift and scale operation, respectively, and give a similar effect as standard scaling. However, when the EDAIN layer is configured to adapt to local data patterns, the location and scale used for normalization dynamically adapt to the data.
  • The fourth sublayer handles skewness by applying a Yeo-Johnson power transformation.
  • All the preprocessing parameters for each of the four sublayers are optimized end-to-end with the neural network model during training through stochastic gradient descent. This allows the data preprocessing to be tailored to the specific machine learning problem we are trying to solve (a simplified sketch of the layer structure follows this list).
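
Below is a minimal, illustrative PyTorch-style sketch of the four sublayers as described above. It is not the authors’ reference implementation; the parameter shapes, initializations, and the simplified Yeo-Johnson variant are our own assumptions.

import torch
import torch.nn as nn

class EDAINSketch(nn.Module):
    """Illustrative sketch of the four EDAIN sublayers (not the reference code)."""

    def __init__(self, num_features):
        super().__init__()
        # 1. Smoothed winsorization: per-feature scale for the tanh squashing.
        self.winsor_scale = nn.Parameter(torch.ones(num_features))
        # 2./3. Learnable shift and scale, similar in effect to standard scaling.
        self.shift = nn.Parameter(torch.zeros(num_features))
        self.scale = nn.Parameter(torch.ones(num_features))
        # 4. Per-feature Yeo-Johnson power parameter (lambda).
        self.lam = nn.Parameter(torch.ones(num_features))

    def yeo_johnson(self, x):
        # Simplified Yeo-Johnson transform; assumes lambda stays away from 0 and 2.
        lam = self.lam
        pos = ((x.clamp(min=0.0) + 1.0) ** lam - 1.0) / lam
        neg = -(((1.0 - x.clamp(max=0.0)) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
        return torch.where(x >= 0, pos, neg)

    def forward(self, x):
        # x has shape (batch, time, num_features).
        x = torch.tanh(x / self.winsor_scale) * self.winsor_scale  # push outliers toward regular values
        x = (x - self.shift) / self.scale                          # shift, then scale
        return self.yeo_johnson(x)                                 # reduce skewness

# The preprocessing parameters train jointly with the downstream model:
layer = EDAINSketch(num_features=3)
model = nn.Sequential(layer, nn.Flatten(), nn.LazyLinear(1))
out = model(torch.randn(8, 10, 3))  # batch of 8 sequences, 10 time steps, 3 features

With these initial values the shift/scale and power-transform stages start near the identity, and training then learns how aggressively to winsorize, rescale, and unskew each feature.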

We found that the method:

  • Reduces manual preprocessing effort
  • Improves model performance and training efficiency
  • Adapts automatically to new data patterns
  • Integrates seamlessly with existing neural network architectures

We compared the proposed EDAIN method against both conventional normalization methods and methods from recent research in extensive cross-validation experiments on two real-world datasets and one synthetic dataset. We found EDAIN to significantly improve accuracy across all datasets considered. These benefits can allow financial services institutions to spend less time on preprocessing their complex data and more time on improving their machine learning models elsewhere, for example, with feature engineering.

The results were promising. Looking ahead, we’re exploring improvements to the handling of missing data, expanded applications within risk assessment, and integration of EDAIN with other machine learning innovations. If you’re interested in reading more about the process, you can read the full paper.

A Quantum Introduction for Non-Quantum Experts (2025-04-10)
https://americanexpress.io/a-quantum-introduction-for-non-quantum-experts

Introduction

I have been intrigued by quantum physics for a very long time – even prior to taking my first quantum class as an undergraduate physics student. Over the past decade, I’ve had the opportunity to work on a wide range of quantum-related research projects involving particle accelerators, correlations between individual photons, quantum-based communication systems, and quantum algorithms. To say that I find quantum interesting would certainly be an understatement, and I’m thrilled about the potential impact that quantum technologies can have across a wide range of fields and industries.

One of the things I am most excited about related to quantum research is the increased level of interest from individuals with backgrounds and specializations outside of quantum. I believe that to fully realize the potential of quantum in the commercial space, it is imperative that quantum researchers have the ability to effectively collaborate with individuals spanning a broad spectrum of expertise. This includes customers, non-quantum researchers, engineers of varying disciplines, project managers, executive leadership, and more. Given that quantum is a relatively new area within most industries, I feel that one of my key responsibilities as a quantum researcher is education. Specifically, to help my non-quantum colleagues understand the aspects of quantum that have the potential to help them develop novel solutions for their problems of interest.

While there is a wealth of information available online related to quantum, it can be challenging to find one, concise source to serve as a well-rounded introduction to the subject for someone with a professional or personal interest.

To help rectify that, I’ve put together this brief overview of foundational quantum knowledge to demystify the subject and aid future learning. I’ll first address what quantum physics is and then offer some insight into the fundamental concepts that make quantum so unique, interesting, and useful from a technological standpoint.

What is Quantum Mechanics?

Quantum mechanics is a branch of physics that describes how things work at the atomic and subatomic scale. In other words, it deals with the components of the universe that are very small. It provides the most accurate description of the nature of things like atoms, protons, electrons, photons, etc. We refer to things described using quantum mechanics as quantum systems, and the way that we describe and quantify those quantum systems is through what we call quantum states. Quantum states, along with quantum superposition and non-deterministic measurement, form some of the core concepts that need to be understood in order to study quantum physics. So, let’s discuss each of them in more detail using a coin flip analogy.

Quantum States

A flipped coin is certainly not a quantum system, but let’s imagine for a moment that it is. When we flip a fair coin, we know that the result will be either heads or tails. So, we can say that our coin has two possible states at the end of the flip: heads, which we will denote as H, and tails, which we will denote as T. Mathematically, we can represent these states using two-element vectors, since there are only two possible states. If you’re not familiar with vectors, it is sufficient to think of them as bookkeeping devices for our current purposes. Using vectors, we can express our coin flip states as follows:

H = (1, 0)ᵀ,   T = (0, 1)ᵀ   (Equation 1)

Quantum researchers often use a notation called Dirac notation, which is named after the physicist Paul Dirac and makes performing certain quantum calculations less cumbersome. Since Dirac notation is so commonly used, it is worth taking one more step and expressing our quantum system and states using it. Rewriting our flipped coin quantum states in Dirac notation gives us the following:

|H〉 = (1, 0)ᵀ,   |T〉 = (0, 1)ᵀ   (Equation 2)

The new |···〉 symbol is called a ket. Let’s go ahead and choose to represent the general state of our quantum coin in Dirac notation as |ψ〉. We could have used any symbol inside of the ket, but I chose the psi symbol (ψ) simply because it is a common choice. With our notation sorted out, we can now formally write the result of our quantum coin flip. At the end of our coin flip, we will find that our quantum system (coin) is in one of our two possible states.

|ψ〉 = |H〉   or   |ψ〉 = |T〉   (Equation 3)

Quantum Superposition

Keeping with our quantum coin analogy, we’ve discussed what happens when we measure the coin flip, which yields one of two possible states (H or T), but what happens before that? What does the quantum state of the flipped coin look like before we measure the result? Quantum systems have a unique property called quantum superposition, which means that a quantum system can exist in multiple states at the same time. It is only through the act of measurement (or, more precisely, interaction) that a quantum system ends up in one specific state. So, applying this to our quantum coin flip means that the coin is in both the H and T states at the same time until we measure it. This means that, formally, our quantum coin state before measurement is written as follows:

|ψ〉 = (1/√2)|H〉 + (1/√2)|T〉   (Equation 4)

What the equation above tells us, in Dirac notation, is that our quantum coin is in both the |H〉 state (heads) and the |T〉 state (tails) at the same time before being measured. The numbers in front of the |H〉 and |T〉 states in the equation are what we call quantum amplitudes. For those of you familiar with linear algebra, these are the coefficients of a linear combination of the states. The meaning of these quantum amplitudes brings us to our final foundational concept, which is non-deterministic measurements.

Non-Deterministic Measurements

If our flipped coin were a regular, non-quantum coin, then it seems very reasonable to assume that we could build a device that would allow us to flip the coin in such a way that we got the same result every time. We could use something like a mechanized flipping device and shock-absorbing floor to ensure that our coin lands in the exact same way every time. In physics, we call this a deterministic system, which means if we know all of the appropriate variables well enough, we can perfectly predict the outcome of an event. If we use our highly-engineered coin flipping device and flip our non-quantum coin 1000 times, then we will record the same result 1000 times (let’s just say heads in this case).

What happens if we use our quantum coin in our coin flipping machine instead? If you conduct the same 1000 flip experiment and record the results, you will find something very odd. You will find that your coin flip resulted in heads around 50% of the time and tails the other 50% of the time. The reason for this is that quantum system measurements are probabilistic. What this means is that each possible measurement outcome of a quantum system has a probability associated with it. The theoretical probability associated with measuring a particular state can be determined using the quantum amplitudes present in our superposition equation. By applying something called the Born rule to our quantum coin example, the probabilities of measuring a heads or tails state (P_H and P_T) are found by squaring the magnitudes of our quantum amplitudes, as shown below.

P_H = |1/√2|² = 1/2   (Equation 5)
P_T = |1/√2|² = 1/2   (Equation 6)
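
As a quick numerical sanity check of the Born rule for our fair quantum coin, here is a short simulation added purely for illustration (not part of the original discussion):

import numpy as np

# Fair quantum coin: amplitudes 1/sqrt(2) for |H> and |T>.
amplitudes = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])

# Born rule: each outcome's probability is the squared magnitude of its amplitude.
probabilities = np.abs(amplitudes) ** 2
print(probabilities)  # [0.5 0.5]

# Simulate 1000 measurements of the superposition state.
rng = np.random.default_rng(seed=7)
flips = rng.choice(["H", "T"], size=1000, p=probabilities)
print((flips == "H").mean())  # roughly 0.5, as described above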

Implications of Quantum Technology

Advancements in quantum technology have the potential to cause major disruptions across a range of industries and disciplines. For example, quantum computers enable new types of algorithms with the potential to solve problems that modern computers cannot solve. However, they also create new challenges related to foundational elements of modern computing, such as security and privacy. For instance, quantum computers have the potential to break current encryption methods, jeopardizing modern security systems. This makes it imperative to develop new cryptographic techniques to ensure robust data protection. New cryptographic techniques could take the form of quantum algorithms, quantum-based physical layer protections in a communications network, or altering current algorithms to quantum-resistant versions.

The potential impact of quantum technologies is enormous, and as the technology advances, the knowledge of how to incorporate it into engineering problems will be invaluable. As a result, it will be essential that engineers and business leaders understand the impact, challenges, and opportunities that quantum research creates.

Todd Hodges

Generative AI Meets Open Source at American Express (2025-04-02)
https://americanexpress.io/generative-ai-meets-open-source-at-american-express

Over the past year, we have witnessed an explosion of thousands of open source Generative AI projects ranging from commercially backed large language models (LLMs) like Meta’s LLaMA to grassroots experimental frameworks. Developments in this field are occurring at a staggering pace, and they demonstrate how Generative AI combined with open source is reshaping technology. At American Express, we see open source as a pathway to more innovative and sustainable AI solutions, especially in the rapidly evolving field of Generative AI, which thrives on the vast datasets and diverse contributions that open source communities can provide.

One of our key initiatives with Generative AI and open source is ConnectChain. It’s our orchestration framework derived from LangChain, one of the most prominent Generative AI frameworks in the open source community, with over 87,000 stars on GitHub. It’s renowned for its LangChain Expression Language (LCEL), which simplifies Python syntax and facilitates the composition of Generative AI workflows in a declarative way. We selected LangChain because it was the most widely adopted framework at the time and it met our needs in terms of functionality; for example, using the framework helps keep our models more on-topic.
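
To make the LCEL idea concrete, here is a small, generic LangChain example in the plain open source style described above; it is not ConnectChain’s API, and the model choice and prompt are placeholders for illustration.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI  # placeholder model provider for illustration

# Declarative LCEL composition: prompt | model | parser.
prompt = ChatPromptTemplate.from_template(
    "Summarize the following customer question in one sentence: {question}"
)
model = ChatOpenAI(model="gpt-4o-mini")  # hypothetical choice; any chat model works
chain = prompt | model | StrOutputParser()

print(chain.invoke({"question": "How do I dispute a charge on my statement?"}))

The pipe operator composes the prompt, model, and output parser into a single declarative chain that can be invoked, streamed, or batched.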

Our contributions to ConnectChain expand the capabilities of LangChain, introducing additional functionality designed specifically to cater to the needs of our broad enterprise userbase and use cases. Today, ConnectChain is a fully open source, enterprise-grade Generative AI platform equipped with a suite of utilities tailored for AI-enabled applications. Its primary objective is to bridge the gap between the needs of large enterprises and the capabilities offered by existing frameworks.

ConnectChain brings together a range of key features and advanced functionalities to enhance deployment, security, and usability, including:

  • Unified deployment configuration: ConnectChain features a unified YAML configuration that facilitates efficient deployments across diverse environments without requiring specific adapters or separate implementations. This allows for a more streamlined and scalable deployment process.
  • Enhanced security, authentication, and authorization: The framework simplifies the authentication process for API-based LLM services by incorporating a login capability that integrates with Enterprise Authentication Services (EAS). It automates the generation of JWT authorization tokens, which are securely passed to the modelling service provider. Additionally, ConnectChain includes configuration-based outbound proxy support at the model level, ensuring secure integration with enterprise-level security protocols and safeguarding data and model interactions within corporate networks.
  • Customizable AI interactions: ConnectChain enhances the LangChain packages by adding hooks to allow for custom-built validation and sanitization logic in the inference chain, giving users greater control over AI-generated content. These hooks enable precise tailoring of prompts, aligning outputs with specific enterprise standards and/or expectations.
  • Operational enhancements: The framework supports a reverse-proxy to facilitate smooth deployment within enterprise environments and includes LCEL enhancements for additional logging utilities. A configurable, swappable LLM interface, complete with EAS token support, further enhances operational capabilities and flexibility.

These innovations within the ConnectChain framework have been shaped by solutions designed to enhance the quality and consistency of our enterprise applications at American Express. We faced common challenges typical of a banking infrastructure, which demands complex networking, secure EAS, observability, and the intricacies of production deployments. These elements are critical when transitioning from experimental applications to fully-scaled enterprise solutions. As we navigate obstacles, particularly in areas like governance, quality control, and security, each successful solution is integrated back into ConnectChain, forming robust, reusable enterprise implementations.

As a result, our operational capabilities across engineering teams have significantly improved. The pace of development has accelerated, as teams no longer need to devise their own solutions for the complex issues that often arise when scaling AI applications, like handling enterprise authentication or session management. We’ve successfully implemented high-level integrations with our existing enterprise tools, enabling many teams to adopt the framework seamlessly, without the need for explicit setup. We continue to gather formal feedback through surveys, but anecdotal evidence already suggests high satisfaction levels. The framework has received widespread approval from numerous teams and departments. To foster a global community of developers around ConnectChain, we offer ongoing support and regularly update our repositories with new examples to facilitate easy onboarding for new users.

As we expand and refine ConnectChain, we are excited to collaborate with other open source developers on this journey. Community contributions can help enhance ConnectChain’s capabilities and bring fresh perspectives and innovative solutions to tackle real-world challenges. Together, we can build more efficient AI tools that meet today’s needs and pave the way for the future.

If you’re interested in innovating together, please visit our GitHub page to get started.

Simplifying Unit Tests with Custom Matchers: Cleaner Code, Clearer Purpose (2023-01-09)
https://americanexpress.io/cleaner-unit-tests-with-custom-matchers

When unit testing, it’s important to cover all your edge cases, but that can come at a cost. Covering edge cases often means making the same or similar assertions over and over again. While test names should clearly describe what is being tested, sometimes these assertions can be messy and have an unclear purpose. Using custom matchers in your tests can help make your assertions cleaner and less ambiguous.

Note: the example used in this article is written using the Jest testing framework.

Let’s take a look at an example. I needed to test several cases to see if cacheable-lookup was installed on an Agent. cacheable-lookup adds some symbol properties to any Agent it’s installed on. We just need to look at the agent’s symbol properties and see if they exist there. The assertion may look something like this:

expect(Object.getOwnPropertySymbols(http.globalAgent)).toEqual(expect.arrayContaining(cacheableLookupSymbols));

So when we are testing that cacheable-lookup gets successfully uninstalled, our spec would be similar to the one below.

it('should uninstall cacheable lookup when DNS cache is not enabled and cacheable lookup is installed', () => {
  installCacheableLookup();
  expect(Object.getOwnPropertySymbols(http.globalAgent)).toEqual(expect.arrayContaining(cacheableLookupSymbols));
  expect(Object.getOwnPropertySymbols(https.globalAgent)).toEqual(expect.arrayContaining(cacheableLookupSymbols));
  setupDnsCache();
  expect(Object.getOwnPropertySymbols(http.globalAgent)).not.toEqual(expect.arrayContaining(cacheableLookupSymbols));
  expect(Object.getOwnPropertySymbols(https.globalAgent)).not.toEqual(expect.arrayContaining(cacheableLookupSymbols));
});

Now we’ve got a working test, but it’s quite repetitive and a little hard to read, a problem that will only be exacerbated when we add more use cases. It’s also not very clear to the next engineer to come across our code what the significance is of each of these assertions. Grokable tests can act as an extension of your documentation, and we’re missing out on that here. Let’s refactor this with a custom matcher to make it DRY, more readable, and more easily comprehensible.

We’ll do this by calling expect.extend, and to keep things simple we’ll reuse the same toEqual matcher from before. Reusing the built-in matchers means that there are fewer implementation details for us to worry about in our custom matcher.

Keeping the matcher in the same file as the tests will reduce indirection and keep the tests grokable. It’s important that we keep it easy for others to understand what exactly the matcher is doing, and, since the matcher is added globally to expect, that can become difficult if we move the matcher to a different file.

Now, let’s give the matcher a really explicit name that tells us exactly what the assertion is checking for, toHaveCacheableLookupInstalled.

import matchers from 'expect/build/matchers';

expect.extend({
  toHaveCacheableLookupInstalled(input) {
    return matchers.toEqual.call(
      this,
      Object.getOwnPropertySymbols(input.globalAgent),
      expect.arrayContaining(cacheableLookupSymbols)
    );
  },
});

Now that we have our custom matcher, we’re ready to refactor those assertions.

it('should uninstall cacheable lookup when DNS cache is not enabled and cacheable lookup is installed', () => {
  installCacheableLookup();
  expect(http).toHaveCacheableLookupInstalled();
  expect(https).toHaveCacheableLookupInstalled();
  setupDnsCache();
  expect(http).not.toHaveCacheableLookupInstalled();
  expect(https).not.toHaveCacheableLookupInstalled();
});

Now our tests are cleaner, but our failure message is not great. Reusing a built-in matcher worked well for us to get things running quickly, but it does have its limitations. Take a look at what we see if we comment out the function that is uninstalling cacheable-lookup.

  ● setupDnsCache › should uninstall cacheable lookup when DNS cache is not enabled and cacheable lookup is installed

    expect(received).not.toEqual(expected) // deep equality

    Expected: not ArrayContaining [Symbol(cacheableLookupCreateConnection), Symbol(cacheableLookupInstance)]
    Received:     [Symbol(kCapture), Symbol(cacheableLookupCreateConnection), Symbol(cacheableLookupInstance)]

      59 |     expect(https).toHaveCacheableLookupInstalled();
      60 |     // setupDnsCache();
    > 61 |     expect(http).not.toHaveCacheableLookupInstalled();
         |                      ^
      62 |     expect(https).not.toHaveCacheableLookupInstalled();
      63 |   });
      64 |

It’s the same as before the refactor, but now it’s worse because the matcher hint still says toEqual even though we’re now using toHaveCacheableLookupInstalled. If we were to write a custom matcher from scratch, we could make this test more effective. We can fix the hint and add a custom error message with a more explicit description of the failure.

expect.extend({
  toHaveCacheableLookupInstalled(input) {
    const { isNot } = this;
    const options = { secondArgument: '', isNot };
    const pass = this.equals(Object.getOwnPropertySymbols(input.globalAgent), expect.arrayContaining(cacheableLookupSymbols));
    return {
      pass,
      message: () => `${this.utils.matcherHint('toHaveCacheableLookupInstalled', undefined, '', options)
      }\n\nExpected agent ${this.isNot ? 'not ' : ''}to have cacheable-lookup's symbols present`,
    };
  },
});

Here we’ve used this.equals to do our comparison, and this.utils.matcherHint to fix the name of our matcher in the hint. this.utils.matcherHint is not very well documented, so you may have to source dive to better understand the API. The order of arguments is matcherName, received, expected, and finally options. Using an empty string for expected prevents the hint from looking like our matcher requires an expected value.

See how greatly this improved our error message:

  ● setupDnsCache › should uninstall cacheable lookup when DNS cache is not enabled and cacheable lookup is installed

    expect(received).not.toHaveCacheableLookupInstalled()

    Expected agent not to have cacheable-lookup's symbols present

      61 |     expect(https).toHaveCacheableLookupInstalled();
      62 |     // setupDnsCache();
    > 63 |     expect(http).not.toHaveCacheableLookupInstalled();
         |                      ^
      64 |     expect(https).not.toHaveCacheableLookupInstalled();
      65 |   });
      66 |

We’ve already made some great improvements to our test suite, but we can make it even better. By further customizing our matcher and getting away from the simple this.equals, we can make our test assert not only that all of the symbols are present when cacheable-lookup is installed, but that none of them are present when it shouldn’t be installed rather than just “not all of them.” We’ll use this.isNot to conditionally use Array.prototype.some or Array.prototype.every when we look for the symbols on the agent depending on whether cacheable-lookup should be installed.

expect.extend({
  toHaveCacheableLookupInstalled(input) {
    const { isNot } = this;
    const options = { secondArgument: '', isNot };
    const agentSymbols = Object.getOwnPropertySymbols(input.globalAgent);
    const pass = isNot
      ? cacheableLookupSymbols.some((symbol) => agentSymbols.includes(symbol))
      : cacheableLookupSymbols.every((symbol) => agentSymbols.includes(symbol));
    return {
      pass,
      message: () => `${this.utils.matcherHint('toHaveCacheableLookupInstalled', undefined, '', options)
      }\n\nExpected agent ${isNot ? 'not ' : ''}to have cacheable-lookup's symbols present`,
    };
  },
});

Now on top of having a clean, DRY test that’s easy to understand and a matcher that we can reuse throughout the rest of our test suite, we have assertions that are even more effective than the simple (but hard to read) toEqual check we started with.

Remember, keeping your custom matcher at the top of the same file as the tests that use it is vital to its usefulness. If you do not, other engineers may not realize it is a custom matcher or know where it comes from. The last thing you want is for your teammates to waste hours searching the internet for documentation on a matcher that doesn’t exist outside your codebase. It’s also important that your matcher is easily understandable. Personally I’m partial to cacheableLookupSymbols.every(agentSymbols.includes.bind(this)), but being explicit in our matcher provides more value than being terse.

Check out the original pull request to One App that inspired this blog post.

Jamie King

Mastering Kotlin: Use-Site Targets (2019-12-03)
https://americanexpress.io/advanced-kotlin-use-site-targets

Annotation Use-Site Targets

This is the second entry in our ongoing “Advanced Kotlin” blog series. Be sure to check out our first post, Advanced Kotlin – Delegates, if you haven’t yet.

In this post we’re going to take a deep-dive into Kotlin Annotation Use-Site targets.

You’ve probably used @get and @set in Kotlin before, but have you come across @receiver or @delegate? Kotlin provides us with nine different annotation use-site targets. In this post we’ll cover all of them. By writing Kotlin code that uses each of these annotation use-site targets and then decompiling from Kotlin into Java using the Show Kotlin Bytecode -> Decompile tool in IntelliJ, we’ll see exactly where each annotation ends up in the resulting code.

The Basics

But first, what exactly are use-site targets and why do we need them?

Many of the libraries, frameworks, and tools we use in Kotlin are actually designed for Java. Either at compile time or at runtime a library may require an annotation to be in a very specific place in your code and/or bytecode for it to function correctly. However, Kotlin is not Java and therefore doesn’t have some of the constructs expected in a Java program, and Kotlin also has some new constructs which are not available to Java. We can use annotation use-site targets to bridge this gap so that various libraries will work as expected.

Simply put, annotation use-site targets allow any @Annotations in your source code to end up at a very specific place in your compiled bytecode or in the Java code generated by kapt.

A Simple Example

Let’s start with a simple example that many of you may have already written yourselves:

@get:SomeAnnotation
var someProperty: String? = null

By prefixing the annotation with @get: the resulting bytecode will have the annotation on the generated getter function:

// Decompiled from the resulting Kotlin Bytecode
@SomeAnnotation
@Nullable
public final String getSomeProperty() {
    return this.someProperty;
}

However, the member variable, the setter function, and any function parameters will not have the SomeAnnotation annotation. In many cases this may be what a framework requires, be it @get:Rule for a JUnit test, @get:ColorRes for an Android resource, or @get:GET for a Retrofit interface.

Overview

In this article we’ll be looking at all nine of the annotation use-site targets available in Kotlin:

  • @get
  • @set
  • @file
  • @param
  • @field
  • @setparam
  • @receiver
  • @property
  • @delegate

Some of these will be either familiar or obvious, such as @get and @set, but some are less obvious and less used, such as @delegate and @receiver. Knowing all of the above is good knowledge to have even if you’ll only use it a handful of times.

We will be using the same method as used above with the @get example in order to test where an annotation resides in decompiled bytecode when using each use-site target. Each example will begin with an annotation that matches the use-site target name, so that it’s easy to spot in our sample code:

annotation class GetAnnotation
annotation class SetAnnotation
annotation class PropertyAnnotation
annotation class ReceiverAnnotation
...etc...

We’ll then add the matching use-site target to the annotation in some sample Kotlin code, decompile the bytecode and see where the resulting annotation lives.

You may notice that many items on this list closely match Java naming conventions (getters, setters, parameters, etc.), as they are intended to help facilitate Kotlin-Java interop. Kotlin’s annotation use-site targets currently only affect JVM bytecode, not Kotlin/Native, Kotlin/JS, or Kotlin Multiplatform projects. This may change in the future, especially for multiplatform language targets such as Swift.

Get and Set use-site targets

Let’s start with the few annotation use-site targets that you’ve probably already seen or used. As an added bonus I’ll also include how you can provide multiple annotations to a single use-site target using [] brackets.

annotation class GetAnnotation
annotation class SetAnnotation
annotation class SetAnnotation2

class Person(
    @get:GetAnnotation val first: String,
    @set:[SetAnnotation SetAnnotation2] var last: String
) {
    // ...
}

The resulting decompiled bytecode is as follows:

public final class Person {
    @NotNull  private final String first;
    @NotNull  private String last;

    @GetAnnotation
    @NotNull
    public final String getFirst() {
        return this.first;
    }

    @NotNull
    public final String getLast() {
        return this.last;
    }

    @SetAnnotation
    @SetAnnotation2
    public final void setLast(@NotNull String var1) {
        this.last = var1;
    }

    public Person(@NotNull String first, @NotNull String last) {
        super();
        this.first = first;
        this.last = last;
    }
}

As you can see, the @GetAnnotation is on the getter and both @SetAnnotation and @SetAnnotation2 are on the setter. They don’t appear anywhere else in the code.

File use-site targets

@file is most commonly used with @file:JvmName in order to provide a custom name for the class generated from a file. This is useful when a file only contains top-level functions or constants; in that case the Kotlin compiler creates a class named YourFileNameKt, with a Kt suffix, which can look odd when used from Java:

@file:JvmName("HttpConstants")
const val HTTP_OK = 200
const val HTTP_NOT_FOUND = 404

Without the above @file:JvmName annotation, using these constants from Java would appear as HttpConstantsKt.HTTP_OK instead of HttpConstants.HTTP_OK.

The @file use-site target can also be used with other annotations, but since this annotation does not end up in the resulting bytecode it would normally be used by Kotlin-specific tools and libraries, not Java ones. An example of this would be using @file:Suppress to suppress lint or detekt rules for an entire file.
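
For instance, here is a small sketch of suppressing an inspection or a detekt rule for a whole file (the package and rule names are purely illustrative):

// File annotations must appear before the package declaration.
@file:Suppress("TooManyFunctions")
package com.example.util

fun helperOne() { /* ... */ }
fun helperTwo() { /* ... */ }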

Field and Parameter use-site targets

Let’s look at how @param and @field affect resulting bytecode by using both in a single piece of code on similar class properties:

annotation class ParamAnnotation
annotation class FieldAnnotation

class Person(
    @param:ParamAnnotation val first: String,
    @field:FieldAnnotation val last: String
) {
    // ...
}

The resulting decompiled bytecode is:

public final class Person {
    @NotNull private final String first;

    @FieldAnnotation
    @NotNull
    private final String last;

    @NotNull
    public final String getFirst() {
        return this.first;
    }

    @NotNull
    public final String getLast() {
        return this.last;
    }

    public Person(@ParamAnnotation @NotNull String first, @NotNull String last) {
        super();
        this.first = first;
        this.last = last;
    }
}

As you can see, where @param was used on first, only the constructor parameter is annotated. If this were a var instead of a val, the setter’s parameter would still not be annotated; we’ll see @setparam shortly, which can be used to target setter parameters.

last, being annotated with @field, only has the annotation on the private backing field. The getter and the constructor parameter do not carry the FieldAnnotation annotation.

These can be useful for targeting specific pieces of code when using a dependency injection framework:

class MyClass @Inject constructor(
    @param:SpecificString private val str: String
)

...

@Inject
@field:MainThread
lateinit var scheduler: Scheduler

Setter Parameter use-site targets

As mentioned above, you can use @setparam to add annotations to setter parameters. Let’s compare @set and @setparam in the same example so you can clearly see the difference.

annotation class SetParamAnnotation
annotation class SetAnnotation

class SetParamAnnTest() {
    @setparam:SetParamAnnotation
    @set:SetAnnotation
    var myStr: String? = null
}

Here we’ve added two use-site targets to the same property: @setparam and @set. The resulting decompiled bytecode shows the difference between the two:

public final class SetParamAnnTest {
    @Nullable
    private String myStr;

    @Nullable
    public final String getMyStr() {
        return this.myStr;
    }

    @SetAnnotation
    public final void setMyStr(@SetParamAnnotation @Nullable String var1) {
        this.myStr = var1;
    }
}

As expected, neither the field nor the getter is annotated. As we’ve seen before, @set results in the annotation being applied to the setter function, while @setparam adds the annotation to the parameter expected by the setter function.

A combination of these can be useful with some Java-based dependency injection libraries:

@set:Inject
@setparam:[Nullable SomeString]
lateinit var str: String

Receiver use-site targets

In Kotlin a “receiver” is the instance on which an extension function is defined, or the type of a lambda with receiver; it is essentially the type on which a block of code is intended to run. In both cases the code runs as if it were a member of the receiving type. In the compiled bytecode an extension function is a static function that takes the receiver as its first parameter, which is also what lets Java code call extension functions by passing in the receiving instance explicitly. Knowing this helps us better understand the resulting bytecode, and therefore where an @receiver annotation will reside.

annotation class ReceiverAnnotation

fun @receiver:ReceiverAnnotation String.capitalizeVowels() =
    this.map {
        if (it in listOf('a', 'e', 'i', 'o', 'u')) it.toUpperCase() else it
    }.joinToString("")

The result is the function’s receiver parameter being annotated:

public static final String capitalizeVowels(
    @ReceiverAnnotation @NotNull String $receiver
) {
    ...
}
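
To make that concrete, here is a small usage sketch; the Java-facing class name StringExtKt in the comment assumes the extension lives in a file named StringExt.kt:

fun main() {
    // From Kotlin the receiver is implicit:
    println("hello world".capitalizeVowels())  // prints "hEllO wOrld"

    // From Java the receiver is passed as the first argument:
    //   StringExtKt.capitalizeVowels("hello world");
}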

Property use-site targets

@property is the only use-site target whose annotation is not visible from Java. This is because Java does not have the same notion of properties as Kotlin. As a result, property-targeted annotations are only useful to Kotlin-specific libraries and tools. To demonstrate this we will add two @property-targeted annotations to a class and then use the Kotlin reflection library to see which items are annotated.

annotation class PropertyAnnotation

class SomeClass1(
    @property:PropertyAnnotation val constructorProp: String
) {
    @property:PropertyAnnotation
    val regProp = "Test"
}

As you can see, we’ve added PropertyAnnotation to both a constructor argument and also to a regular Kotlin property.

If we decompile the Kotlin bytecode, we can see that these annotations are not present in what would be the Java equivalent of this Kotlin code:

public final class SomeClass1 {
    @NotNull
    private final String regProp;

    @NotNull
    private final String constructorProp;

    @NotNull
    public final String getRegProp() {
        return this.regProp;
    }

    @NotNull
    public final String getConstructorProp() {
        return this.constructorProp;
    }

    public SomeClass1(@NotNull String constructorProp) {
        this.constructorProp = constructorProp;
        this.regProp = "Test";
    }
}

In order to read the property annotations we need to use Kotlin’s reflection library at runtime. This is similar to how other libraries or tools would consume property-targeted annotations.

// Requires the kotlin-reflect library on the classpath.
import kotlin.reflect.full.memberProperties

fun main(args: Array<String>) {
    for (prop in SomeClass1::class.memberProperties) {
        println("${prop.name} has annotations ${prop.annotations}")
    }
}

The output from this code shows that each member property has the PropertyAnnotation annotation specified:

constructorProp has annotations [@PropertyAnnotation()]
regProp has annotations [@PropertyAnnotation()]
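
A library or tool would typically filter for a specific annotation rather than print them all. Here is a small sketch using kotlin-reflect (this works because Kotlin annotation classes have runtime retention by default):

import kotlin.reflect.full.findAnnotation
import kotlin.reflect.full.memberProperties

fun annotatedPropertyNames(): List<String> =
    SomeClass1::class.memberProperties
        .filter { it.findAnnotation<PropertyAnnotation>() != null }
        .map { it.name }  // contains "constructorProp" and "regProp"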

Delegate use-site targets

The previous blog post in our “Advanced Kotlin” series covered delegates in Kotlin. If you are not familiar with delegates I recommend you check out that post.

The @delegate use-site target might be the hardest one to guess where the actual annotation will end up. Is it on the delegate itself? On the delegate class? On the function used to receive the value from the delegate? Let’s take a look at some code and see what happens.

annotation class DelegateAnnotation

class MyDel {
    @delegate:DelegateAnnotation
    val name by lazy { "something" }
}

As you can see from the decompiled code below, the annotation resides on the generated private backing field that holds the delegate (here a Lazy). The function used to call the delegate (in this case getName) does not receive the annotation.

public final class MyDel {

    @DelegateAnnotation
    @NotNull
    private final Lazy name$delegate;

    @NotNull
    public final String getName() {
        Lazy var1 = this.name$delegate;
        KProperty var3 = $$delegatedProperties[0];
        return (String)var1.getValue();
    }

    public MyDel() {
        this.name$delegate = LazyKt.lazy((Function0)null.INSTANCE);
    }
}

You may find @delegate useful for targeting a property that you wish to wrap in something like a lazy delegate:

@delegate:Transient
val myLock by lazy { ... }
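
Here is a slightly fuller sketch of that pattern, assuming a Serializable class whose lazily computed value should not be written during Java serialization (the class and property names are hypothetical):

import java.io.Serializable

class ReportCache(private val rows: List<String>) : Serializable {
    // The annotation lands on the generated summary$delegate field, so Java
    // serialization skips the cached Lazy instance.
    @delegate:Transient
    val summary: String by lazy { rows.joinToString(separator = "\n") }
}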

Default Targets

Now that we’ve covered all nine of the available use-site targets, let’s cover what happens when a target is not specified on an annotation.

@SomeAnnotation
val str: String? = null

When an annotation is created, it can itself be annotated with an @Target annotation which lists the targets that are available for the annotation. The available target values for @Target are defined in the AnnotationTarget enum:

  • CLASS
  • ANNOTATION_CLASS
  • TYPE_PARAMETER
  • PROPERTY
  • FIELD
  • LOCAL_VARIABLE
  • VALUE_PARAMETER
  • CONSTRUCTOR
  • FUNCTION
  • PROPERTY_GETTER
  • PROPERTY_SETTER
  • TYPE
  • EXPRESSION
  • FILE
  • TYPEALIAS

The main purpose of these values is to let the compiler know where in source code an annotation is allowed to be used. For example, if an annotation is defined with the CLASS target:

@Target(AnnotationTarget.CLASS)
annotation class ClassAnnotation

Then the annotation can only be applied to a class:

@ClassAnnotation // OK
class Temp {
    @ClassAnnotation // Compilation error!
    val str: String? = null
}

These target values, along with where the annotation is placed, are used to pick a default use-site target when one is not specified. If more than one target is applicable, the first applicable one of @param (constructor parameters), @property, or @field is used, in that order.

Let’s see an example of this, using the same code as above:

@SomeAnnotation
val str: String? = null

If @SomeAnnotation were defined with @Target(FIELD), then this usage would behave as though it were written @field:SomeAnnotation; as we saw earlier, the private field would then carry the annotation in the bytecode. If @SomeAnnotation had no @Target defined at all, it would be applicable to any element, so @property would be chosen first from the list (@param, @property, @field). Remember that @property is only visible from Kotlin, so the resulting annotation would not be useful from Java. If you’ve ever struggled with a framework not finding an annotation that you thought you had added, this default is the likely culprit.
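
Here is a minimal sketch of that pitfall, using a hypothetical annotation standing in for one that a Java framework would look for reflectively on a field:

annotation class JsonField  // no @Target, so it is applicable almost anywhere

class Profile(
    @JsonField val id: String                // constructor property: defaults to @param:
) {
    @JsonField val name: String = ""         // member property: defaults to @property:, invisible from Java
    @field:JsonField val email: String = ""  // explicit target: the backing field is annotated
}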

Stay tuned! More to come!

This article hopefully covered some annotation use-site targets that you have not encountered before. Knowing each of these can often get you out of a bind when trying to use a framework that requires very specific placement of annotations in your code.

If you would like to read more about use-site targets in Kotlin, take a look at the official Kotlin documentation on annotations: https://kotlinlang.org/docs/annotations.html

Stay tuned for more!

Brent Watson