Automatically deploying Hexo to GitHub Pages and Vercel
Preparation: first, you need a local copy of the blog source code, plus two repositories on GitHub: hexo-source and xxxx.github.io.git. You also need a credential to connect to the GitHub repositories, either a GitHub token or an SSH key. A GitHub token is used to connect over HTTPS; an SSH key pair is used to connect over SSH, with the private key kept on your machine and the public key added to GitHub. The hexo-source repository backs up the local source code; set it to private (after all, I don't want someone to copy my whole blog system with a single git clone). Files matched by .gitignore need not be backed up, since they are environment dependencies and generated output:

```
.DS_Store
Thumbs.db
db.json
*.log
node_modules/
public/
.deploy*/
_multiconfig.yml
```

xxxx.github.io.git is the GitHub Pages repository; it must be public. Use hexo...
Fixing the Git error "Failed to connect to github.com port 443"
Problem: the machine can log in to github.com in a browser, but commands such as git push fail to connect. Solution: by default, git push, git pull, and git clone connect over HTTP(S), so we can change git's HTTP transport to go through a proxy server to reach GitHub. Use the proxy server's SOCKS port:

```
git config --global http.proxy socks5://127.0.0.1:10808
git config --global https.proxy socks5://127.0.0.1:10808
```

or use the proxy server's HTTP port:

```
git config --global http.proxy http://127.0.0.1:10809
git config --global https.proxy http://127.0.0.1:10809
```

Here...
Automatic differentiation in PyTorch
Automatic differentiation: PyTorch supports automatic differentiation, i.e. computing derivatives automatically. In a deep learning framework we typically take the partial derivatives of a loss function with respect to the learnable parameters.

```python
import torch

x = torch.arange(4.0)    # x = tensor([0., 1., 2., 3.])
x.requires_grad_(True)   # equivalent to x = torch.arange(4.0, requires_grad=True)
x.grad                   # defaults to None
```

If we will later need to compute gradients with respect to some variable...
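Continuing the snippet above, a minimal sketch of a full backward pass; the scalar function y = 2·x·x is an illustrative choice, not taken from the post:

```python
import torch

x = torch.arange(4.0, requires_grad=True)  # x = tensor([0., 1., 2., 3.])
y = 2 * torch.dot(x, x)                    # scalar "loss" y = 2 * (x . x)
y.backward()                               # autograd fills x.grad with dy/dx
print(x.grad)                              # tensor([ 0.,  4.,  8., 12.]), i.e. 4 * x
```

Calling `backward()` on a scalar populates the `.grad` field of every leaf tensor with `requires_grad=True`, which is exactly what an optimizer reads when updating parameters.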
Diffusion basics
Comparison of generative models: a GAN consists of a discriminator and a generator. The discriminator tries to distinguish real samples $x$ from generated samples $x'$, while the generator tries to produce samples that get past the discriminator; after many iterations the generator's outputs look more and more like $x$, i.e. the generative samples we want. A VAE also learns a distribution, but the mapping it learns goes from the sample space to a latent (semantic) space. Flow-based models are the first architectures that truly learn the distribution itself. Overall, the forward process (diffusion process) runs right to left, $x_0 \rightarrow x_T$; the reverse process (denoising process) runs left to right, $x_T \rightarrow x_0$. Both the diffusion process and the denoising process are treated as Markov processes, with $x_0 \sim q(x_0)$. The task is to learn a distribution...
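Under the Markov assumption above, the two processes factorize into per-step transitions. A sketch in standard DDPM notation; the Gaussian transition form and the noise schedule $\beta_t$ are conventional assumptions not stated in the excerpt:

```latex
% forward (diffusion) process: a fixed Markov chain adding Gaussian noise
q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}),
\qquad
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\big)

% reverse (denoising) process: a learned Markov chain
p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t)
```

The learning task is then to fit the reverse transitions $p_\theta(x_{t-1} \mid x_t)$ so that the chain maps noise $x_T$ back to data $x_0$.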
Attention is all you need
model architecture Inputs: a paragraph of English consisting of $B$ (i.e. batch_size) sentences, each with at most $N$ (i.e. seq_length) words. Outputs: a paragraph of Chinese translated from the inputs, of shape $(B, N)$. Encoder outcome: a feature matrix containing positional, contextual, and semantic information. Decoder: auto-regressive, consuming the previously generated symbols as additional input when generating the next. For example: Inputs: "I love u." ($B = 1$). Learning feature from...
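The core operation inside both encoder and decoder is scaled dot-product attention. A minimal numpy sketch of the formula softmax(QKᵀ/√d_k)V from the paper; the shapes and random inputs are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (batch, seq_len, d_k); returns softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
B, N, d = 1, 4, 8                        # toy batch_size, seq_length, model dim
Q = rng.normal(size=(B, N, d))
K = rng.normal(size=(B, N, d))
V = rng.normal(size=(B, N, d))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                         # (1, 4, 8)
```

In the decoder's auto-regressive setting, a causal mask would additionally zero out attention to positions after the current one; that mask is omitted here for brevity.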
[CVPR2024]Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for Scene Flow
Motivation Scene flow aims to model the correspondence between adjacent visual RGB or LiDAR features to estimate 3D motion features. RGB and LiDAR have intrinsically heterogeneous natures, so fusing them directly is inappropriate. We discover that events are homogeneous with RGB and LiDAR in both the visual and motion spaces. Visual-space complementarity: an RGB camera measures the absolute value of luminance, while an event camera measures relative changes of luminance; LiDAR captures global shape, while an event camera captures local boundaries. Motion...
[CVPR2024]Point Transformer V3: Simpler, Faster, Stronger
Motivation Scaling up is all you need. Scale here covers the size of datasets, the number of model parameters, the range of the effective receptive field, and computing power. The scaling principle trades efficiency (simplicity, scalability) against accuracy. Unlike the advances made in 2D vision or NLP, previous works in 3D vision had to focus on improving model accuracy, due to the limited size and diversity of the point cloud data available in separate domains. The time consumption of Point Transformer V1...
[CVPR2021]Point Cloud Transformer
Motivation Point clouds are disordered (permutation-invariant) and unstructured, which makes it difficult to design a neural network to process them. All operations of the Transformer are parallelizable and order-independent, which makes it suitable for point cloud feature learning. In NLP, the classical Transformer uses positional encoding to compensate for the order-independence of attention: word input is ordered and words carry basic semantics, whereas point clouds are unordered and individual points have no semantic...
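To make the permutation-invariance requirement concrete, a small numpy sketch; the max-pool aggregator is an illustrative stand-in for the symmetric pooling a point cloud network must use:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(128, 3))      # a toy point cloud: 128 points with xyz coords
perm = rng.permutation(len(points))     # an arbitrary reordering of the points

# a symmetric aggregation (elementwise max over the point axis) is unaffected
# by point order, so the same cloud always yields the same global feature
feat_original = points.max(axis=0)
feat_shuffled = points[perm].max(axis=0)
print(np.allclose(feat_original, feat_shuffled))  # True
```

Attention has the same property: permuting its inputs permutes the per-point outputs but leaves any symmetric readout unchanged, which is why it fits point clouds despite lacking the ordered input that positional encoding assumes.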
[NIPS2023] Asynchrony-Robust Collaborative Perception via Bird's Eye View Flow
1. Motivation Irregular asynchronous setting: the time stamps of the collaboration messages from other agents are not aligned, and the time interval between two consecutive messages from the same agent is irregular. Problem formulation:

$$\max_{P,\theta}\ \sum_{n=1}^{N} g\big(\widehat{Y}_{n}^{t_{n}^{i}},\ Y_{n}^{t_{n}^{i}}\big) \quad \text{subject to}\quad \widehat{Y}_{n}^{t_{n}^{i}} = c_{\theta}\big(X_{n}^{t_n^i},\ \{P_{m\rightarrow n}^{t_m^j},\ P_{m\rightarrow n}^{t_m^{j-1}},\ \dots,\ P_{m\rightarrow n}^{t_m^{j-k+1}}\}_{m=1}^{N}\big)$$
...
1. Function pointers
Why use function pointers? Flexibility and generality at the call site. Imagine that at design time we do not know the concrete implementation details of a function. For example, we want a sorting function qsort, but the exact ordering rule is undecided: ascending or descending, and which algorithm to use, are only determined when the user calls the function. So the caller writes a comparator function themselves and passes it to qsort. Function pointers also enable object-oriented-style programming. For example, we design a struct apple. Besides the apple's attributes, such as count, weight, and color, we also define operations on apples, such as eating and planting, as function pointers. Later, whenever we use this struct, we can call the functions as a.eat(&b).

```c
typedef struct apple {
    int number;
    double weight;
    colorType color;
    // some operations
    bool (*eat)(struct apple*);
    bool (*plant)(struct ...
```