James Williams

Pulling a Smartsheet table into Microsoft Excel using Power Query

Well if you thought my first post in eight months would be exotic, go ahead and smash that back button.

I use this technique when we have one-off assignments at work where I need a quick and dirty web-based data store that several people can collaborate on, and that can be easily queried in Excel without any intermediate infrastructure or processing. This would be quite trivial if not for Smartsheet’s intractable API format.

Assuming you have a Smartsheet grid that you want to mirror in Excel and refresh on the fly, you’ll need the sheet’s ID and an API bearer token for a user with viewer permissions.
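If you want to sanity-check the sheet ID and token before touching Excel, a quick curl against the same endpoint the query below uses should return the sheet as JSON:

# Replace $SHEET_ID and $BEARER_TOKEN with your values
curl -s -H "Authorization: Bearer $BEARER_TOKEN" "https://api.smartsheet.com/2.0/sheets/$SHEET_ID"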

In Excel, open Power Query and create a new query using the Advanced Editor. Make sure to replace $SHEET_ID and $BEARER_TOKEN. The query will bring in both your data and column headers.

let
    Source = Json.Document(
        Web.Contents("https://api.smartsheet.com/2.0/sheets/$SHEET_ID", [
            Headers=[
                #"Content-Type"="application/json",
                Authorization="Bearer $BEARER_TOKEN"
            ]
        ])
    ),

    // Process rows
    RowsData = Source[rows],
    RowsTable = Table.FromList(RowsData, Splitter.SplitByNothing()),
    ExpandedRows = Table.ExpandRecordColumn(
        RowsTable,
        "Column1",
        {"id", "rowNumber", "expanded", "createdAt", "modifiedAt", "cells", "siblingId"},
        {"ID", "RowNumber", "Expanded", "CreatedAt", "ModifiedAt", "Cells", "SiblingId"}
    ),
    ExpandCells = Table.ExpandListColumn(ExpandedRows, "Cells"),
    ExpandedCellsDetails = Table.ExpandRecordColumn(
        ExpandCells,
        "Cells",
        {"columnId", "value", "displayValue"},
        {"ColumnID", "CellValue", "CellDisplayValue"}
    ),
    RemovedCellsMetaColumns = Table.RemoveColumns(
        ExpandedCellsDetails,
        {"ID", "Expanded", "CreatedAt", "ModifiedAt", "CellDisplayValue", "SiblingId"}
    ),
    PivotedCellsByColumnId = Table.Pivot(
        Table.TransformColumnTypes(RemovedCellsMetaColumns, {{"ColumnID", type text}}),
        List.Distinct(Table.TransformColumnTypes(RemovedCellsMetaColumns, {{"ColumnID", type text}})[ColumnID]),
        "ColumnID",
        "CellValue"
    ),
    CleanRowData = Table.RemoveColumns(PivotedCellsByColumnId, {"RowNumber"}),

    // Process columns
    ColumnsData = Source[columns],
    ColumnsTable = Table.FromList(ColumnsData, Splitter.SplitByNothing()),
    ExpandedColumns = Table.ExpandRecordColumn(
        ColumnsTable,
        "Column1",
        {"id", "title"},
        {"ColumnID", "ColumnTitle"}
    ),
    ColumnTitlesMapped = Table.Pivot(
        Table.TransformColumnTypes(ExpandedColumns, {{"ColumnID", type text}}),
        List.Distinct(Table.TransformColumnTypes(ExpandedColumns, {{"ColumnID", type text}})[ColumnID]),
        "ColumnID",
        "ColumnTitle"
    ),

    // Add headers
    CombinedDataTable = Table.Combine({ColumnTitlesMapped, CleanRowData}),
    FinalData = Table.PromoteHeaders(CombinedDataTable)

in
    FinalData

How to checkout and edit a pull request locally

Let’s say you have a dependabot pull request and Charlie Marsh has added a new check to Ruff that causes your lint check to fail. You can fix the lint error and push the changes back to the pull request branch!

First, checkout the pull request locally:

# In this case, I'm updating ruff to v0.0.278
git fetch origin dependabot/pip/ruff-0.0.278
git switch --track origin/dependabot/pip/ruff-0.0.278

We’ve now checked out the PR branch and set it to track the remote. We can use this pattern to keep tabs on long-running PRs, or, as in this case, simply push an additional patch before merging. If you’d like a friendlier local branch name, you can append :my-branch-name to the end of the git fetch call and then run git switch my-branch-name to check it out, as shown below; just keep in mind that this won’t set the local branch to track the remote.
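For example, a hypothetical friendlier name for the same Dependabot branch might look like this:

# Fetch the PR branch into a local branch named ruff-bump (no upstream tracking)
git fetch origin dependabot/pip/ruff-0.0.278:ruff-bump
git switch ruff-bump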

In my case, this ruff release does not introduce any new rule categories and my lints still pass; however, I’d like to update the ruff version in my .pre-commit-config.yaml file so that it’s consistent with my requirements.txt. I’ll make that change, commit, and push back to the remote.

git add .pre-commit-config.yaml
git commit -m "Update pre-commit config."
git push

At this point, your checks should fire again and you can merge into your trunk using your preferred merge method. Check out the real pull request to see how this looks server-side.

Running a local Kubernetes cluster with Kind: A step-by-step guide

It has happened. I thought I could avoid it, but here we are. As if getting your program to run on one computer wasn’t hard enough, now we have to run it on multiple computers at the same time? They have played us for absolute fools.

Anyway, assuming we have some shared experience with Docker, let’s introduce some terminology:

  • A Pod is the smallest deployable unit in Kubernetes, often a single instance of an application. As I understand it, a pod is the logical equivalent of a container.
  • Nodes are the machines that host these pods. More nodes allow for more redundancy.
  • A Cluster is a set of nodes with the same job. A cluster can run multiple nodes, and a node can run multiple pods, and a pod typically consists of between two and fifteen orca whales.
  • A Service is an abstraction which provides a single network entry point to distribute traffic across the cluster.

For local development I am using Kind, a tool which allows you to run Kubernetes clusters in Docker containers. It is a lightweight way to run docker containers inside kubernetes inside a docker container (pause for effect).

The command to create a cluster is: kind create cluster
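For reference, here’s a minimal sketch of creating a cluster and confirming kubectl can reach it (Kind names the cluster kind by default, so the kubectl context is kind-kind):

kind create cluster
kind get clusters
kubectl cluster-info --context kind-kind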

To deploy the application, it needs to be packaged as a Docker image. After creating the Dockerfile, the image is built and loaded into the Kind cluster with the following commands:

docker build -t my-image-name .

kind load docker-image my-image-name

I should note that in addition to Kind there is a similar tool called minikube, though it takes a bit more setup to get local images into its cluster.

The next step is creating a deployment and a service for the application by adding Kubernetes manifest files to your project directory. The simplest possible configuration looks something like this:

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-image-name-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-image-name
  template:
    metadata:
      labels:
        app: my-image-name
    spec:
      containers:
        - name: my-image-name
          image: my-image-name
          imagePullPolicy: Never # Use for local image
          ports:
            - containerPort: 8000 # Use the port your application runs on

Note that the imagePullPolicy is set to Never because we are using a local image with the implied tag latest. Pinning an explicit tag should make this unnecessary; otherwise the default behaviour is to try to pull the image from Docker Hub, which will fail each time (or worse, deploy something unexpected).
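If you’d rather pin a tag, the flow might look something like this (the dev tag is just for illustration):

# Build and load an explicitly tagged image instead of relying on :latest
docker build -t my-image-name:dev .
kind load docker-image my-image-name:dev
# ...then reference image: my-image-name:dev in deployment.yaml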

In addition to matching the exposed port of the container, your application should be configured to bind to any incoming address (0.0.0.0), not just localhost.
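What that looks like depends on your framework; as a purely hypothetical example, a Python app served with uvicorn might be started with:

# Hypothetical example: listen on all interfaces, not just 127.0.0.1
uvicorn main:app --host 0.0.0.0 --port 8000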

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-image-name-service
spec:
  type: NodePort
  ports:
    - port: 8000
      nodePort: 30080
  selector:
    app: my-image-name

With these files in place, we can create the deployment and service using kubectl apply -f <file-name> for each, and verify them with kubectl get deployments and kubectl get services, as shown below.
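kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify that both were created
kubectl get deployments
kubectl get services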

If there are any issues, logs can be checked with kubectl logs <pod-name>, and the pod name can be found using kubectl get pods.

Remember to specify environment variables in the deployment.yaml file under env in the containers specification if your application requires them.
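As a sketch (the variable name and value here are made up), an env block slots into the container spec like this:

# deployment.yaml (containers excerpt)
      containers:
        - name: my-image-name
          image: my-image-name
          imagePullPolicy: Never
          env:
            - name: DATABASE_URL # hypothetical variable
              value: "postgres://db.example.com:5432/app"
          ports:
            - containerPort: 8000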

If you’re running Docker inside a Linux virtual machine, port 30080 should already be exposed. If you’re using Docker Desktop, there’s one more step: forwarding a local port to the service port. This can be done using:

kubectl port-forward service/my-image-name-service 30080:8000

This will map the service to localhost:30080 on your local machine. Launch it in tmux or append an ampersand to the command, as shown below, since it will block the terminal otherwise.
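# Run the port-forward in the background so the terminal stays usable
kubectl port-forward service/my-image-name-service 30080:8000 &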

Fin. Now deploy to prod on a Friday afternoon and you’re done!

Notes from Stephen Wolfram's ChatGPT primer

Source Material: Stephen Wolfram, 2023-02-14

ChatGPT is a large-scale transformer-based language model that is designed to predict the next word in a sentence given the context of what has been said. It is a neural network with 175 billion parameters that has been trained on a vast corpus of text, enabling it to form and apply a semantic structure to human language.

The GPT in ChatGPT stands for Generative Pre-trained Transformer. Generative means that the model is capable of generating new text rather than just recognizing patterns in existing text. Pre-trained means that the model has been trained on a large corpus of text before being fine-tuned for a specific task. Transformer refers to the specific type of neural network architecture, which is designed to better handle long-term dependencies between words in a sentence.

To accomplish its task, ChatGPT uses a technique known as unsupervised learning, which allows it to learn patterns in the data without being explicitly taught. Instead of being trained on explicit examples of inputs and their associated outputs as in supervised learning, the model is given a large corpus of text and is trained to predict the next word in a sentence by masking the latter part of the sentence and having it predict what should come next. It then compares what it generated with the masked text, and iteratively adjusts its parameters to minimize the error.

To evaluate how well the model performs on each iteration, a loss function is used. The loss function calculates how far away the model’s predictions are from the desired outcome, and the neural net weights are adjusted in a way that minimizes the result of the loss function.

Training the model both optimizes the neural net weights and produces embeddings, which are a way of representing the meaning of words as arrays of numbers (in the vague, undefinable sense of ‘meaning’). Nearby words are represented by nearby numbers. ChatGPT takes this concept further by generating embeddings not just for individual words, but for entire sequences of words.

These embeddings are then used to predict the probabilities of different words that might come next in a sentence. This is accomplished using a transformer architecture, which is designed to better handle dependencies between tokens in the input and output even when they are far from each other in the input sequence. One of the defining features of the transformer is its use of an attention mechanism, which involves certain neurons focusing more on relevant parts of the sequence than others. This allows ChatGPT to take into account the context of the conversation that’s taken place, which can inform the next token that’s generated. The attention mechanism also allows ChatGPT to capture context from the prior conversation even when that context is not adjacent to the token being generated. This is the main reason ChatGPT comes across as a coherent entity.

Finally, ChatGPT uses a temperature setting to introduce a degree of randomness into its predictions, which can make the output more diverse and interesting.
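(Roughly speaking, the usual way temperature enters is that the model’s raw scores are divided by T before being converted to probabilities, something like p_i ∝ exp(score_i / T): a higher T flattens the distribution and makes unlikely words more probable, while a lower T concentrates it on the most likely word.)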


What strikes me as the most profound point Stephen makes is that ChatGPT’s success is itself a kind of scientific discovery: it suggests there may be simple rules describing how the semantics of human language are arranged that we ourselves don’t yet understand. Studying the pathways and structures ChatGPT uses could help deepen our own understanding of human language.

LLM Round-up

When I asked ChatGPT what type of AI it is, this is what it said:

ChatGPT is a type of language model, specifically a variant of a transformer model called the Generative Pretrained Transformer (GPT). It is an example of artificial general intelligence (AGI), which means that it is capable of performing a wide range of natural language processing tasks, such as text generation, summarization, translation, and question answering, among others.

I know some of these words 🤦‍♂️.

I’ll never be an AI researcher, but I do want to have a layman’s understanding of what the tool is capable of and how it works. I’ve been searching for educational resources that lie somewhere in the sweet spot between accessibility and depth. I want to understand the basic terminology, like what a transformer is, and also understand a large language model’s shortcomings. And I want to understand where a language model fits into the broader landscape of AI.

To that end, I’m compiling a few resources that seem to be well regarded. I’ll treat this post as a digital bookshelf, and will update it with my thoughts and notes as I work through the material.

We’ll start with 35,000 words from Stephen Wolfram as a primer, then move into Andrej Karpathy’s Zero to Hero lecture series.